Discussion with Lincoln Yu about safeguarding wheels with TDD
Decheng "Robbie" Fan
Yeah, making one's own wheels is a virtue of a programmer! I do believe in this. That's why I'm now looking into TDD as part of the solution. You know, for personal programs, testing takes a lot of time. TDD can't replace code review--code quality itself still needs to be ensured manually--but it can replace a lot of the test work. This would help a lot when enhancing the wheels with new features.
Regarding how to use TDD effectively:
1. Do pair programming only for critical code. I like TDD but am not fond of pair programming as a way to ensure final quality (not just because programming efficiency is lower, but also because pair programming is a dynamic process that involves little actual code review). Critical code is worth pair programming because immediate feedback matters there;
1.1 Do pair revision instead. You and your colleague write code separately, then review each other's code. Write down the review feedback, and do pair revision (update the code together) based on that feedback;
2. The "evolution" concept of TDD is valuable. It helps you grow the test cases and the implementation code gradually, accumulating a lot of valuable test cases and letting you think about the implementation logic from easy to difficult, and from incomplete to complete. Sometimes this evolution requires you to throw away a whole incomplete implementation, because it's incompatible with the complete solution. But it has already made you think more, which is good. Write unit tests--tests red--update the implementation--tests green--write more unit tests--repeat until implementation done. After all unit tests are built, add more test cases by using traditional techniques (Book: How We Test Software at Microsoft) to ensure better test strength, and perform a complete code review (implementation should be reviewed carefully; unit tests should be gone through. Reference: http://www.fandecheng.com/personal/interests/programming/program-error-prev.htm);
3. I have used some mock frameworks and have found that mock frameworks (aka mocks) are more evil than mock classes (aka fakes). Mock classes can be debugged and are flexible, while mock frameworks are rigid, introduce tighter coupling between the implementation and the tests, and can't be debugged. One step further: single-layer mock classes are OK if we just use them to isolate the code under test from the implementation details of the depended-on components (DOCs, including libraries and frameworks), but they are not enough if we want to do integration tests. So I would do two things:
3.1 For unit tests that use single-layer mock classes, perform complete DOC tests first (manually or automatically) to verify the DOC's behavior, and record that behavior (as comments in the unit tests). For example, write experiment code to test NTFS behavior, and write down how NTFS behaves. Then build the mock classes based on the observed behavior (see the fake-class sketch after this list);
3.2 For unit tests that act as integration tests, we need to implement "deep and real" mock classes. These mock classes behave very much like real implementations of the DOCs. The only difference is that they use virtualized storage (such as a database dedicated to unit testing rather than a real one) and a virtualized environment (such as a virtual network server dedicated to unit testing, not a real one). Such virtualized storage and environment are automatically initialized before each test case, so the unit tests stay repeatable. For example, to do an integration test of your application against FAT32, implement an in-memory version of FAT32 and run the test against it (see the in-memory file-system sketch after this list).
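To make point 2 concrete, here is a minimal sketch of the red-green cycle using Python's unittest. parse_size is just an illustrative function I made up for this sketch, not code from any real wheel; what matters is the order in which the tests (and therefore the implementation) were grown.

import unittest


def parse_size(text):
    """Parse strings like '512', '4KB', '2MB' into a byte count."""
    units = {"": 1, "KB": 1024, "MB": 1024 ** 2}
    digits = text.rstrip("BKM")
    unit = text[len(digits):]
    return int(digits) * units[unit]


class ParseSizeTest(unittest.TestCase):
    # Step 1: written first, failed (red) against an empty stub,
    # then made green with a bare `return int(text)`.
    def test_plain_number(self):
        self.assertEqual(parse_size("512"), 512)

    # Step 2: forced the unit-handling logic to appear.
    def test_kilobytes(self):
        self.assertEqual(parse_size("4KB"), 4 * 1024)

    # Step 3: forced the single 'KB' special case to become a table,
    # i.e. the incomplete first implementation was thrown away.
    def test_megabytes(self):
        self.assertEqual(parse_size("2MB"), 2 * 1024 ** 2)


if __name__ == "__main__":
    unittest.main()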
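For 3.1, here is a minimal sketch of a single-layer fake whose behavior is pinned down by comments recording what the real DOC did in a separate experiment. The class and function names are illustrative; the NTFS observation (case-preserving storage but case-insensitive matching by default) is the kind of fact such an experiment would record. Because the fake is an ordinary class, you can set a breakpoint inside it and step through it, which a mock framework would not let you do.

import unittest


class FakeNtfsDirectory:
    """Single-layer fake standing in for a real directory on NTFS."""

    def __init__(self, names):
        # Observed in a throwaway experiment against a real NTFS volume:
        # file names keep their case but are matched case-insensitively
        # by default. The fake mirrors exactly that recorded behavior.
        self._names = {n.lower(): n for n in names}

    def exists(self, name):
        return name.lower() in self._names


def find_config(directory):
    """Code under test: locate a config file through the DOC interface."""
    return directory.exists("Settings.ini")


class FindConfigTest(unittest.TestCase):
    def test_finds_config_regardless_of_case(self):
        fake = FakeNtfsDirectory(["SETTINGS.INI", "data.bin"])
        self.assertTrue(find_config(fake))


if __name__ == "__main__":
    unittest.main()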
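For 3.2, a minimal sketch of a "deep and real" fake: an in-memory stand-in for a FAT32-like file system, rebuilt in setUp so every test case starts from a clean, repeatable state. InMemoryFat32 and copy_file are illustrative names, not a real FAT32 implementation; a real deep fake would model whichever FAT32 rules the application actually depends on.

import unittest


class InMemoryFat32:
    """Behaves like the real DOC, but stores everything in a dict."""

    def __init__(self):
        self._files = {}

    def write(self, path, data):
        # Real FAT32 has richer limits (8.3 short names, 4 GB file size, ...);
        # a deep fake models the ones the application actually relies on.
        if len(data) >= 4 * 1024 ** 3:
            raise OSError("file too large for FAT32")
        self._files[path] = data

    def read(self, path):
        return self._files[path]


def copy_file(fs, src, dst):
    """Code under test: copies a file through the file-system interface."""
    fs.write(dst, fs.read(src))


class CopyFileIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Virtualized storage is rebuilt before each test case, which keeps
        # the tests repeatable without touching a real volume.
        self.fs = InMemoryFat32()
        self.fs.write("a.txt", b"hello")

    def test_copy_creates_identical_file(self):
        copy_file(self.fs, "a.txt", "b.txt")
        self.assertEqual(self.fs.read("b.txt"), b"hello")


if __name__ == "__main__":
    unittest.main()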