This one is simple enough, yet I still see a lot of constants defined as macros. Always use static const or constexpr variables rather than a #define. If your build process involves setting variables such as a version number or a git hash, consider generating a source file rather than passing them to the compiler as defines. The snippet above is from the Win32 API. If you need lazy evaluation of the function arguments, use a lambda. Properly isolating the platform-specific nastiness in separate files, separate libraries and methods should reduce the occurrence of #ifdef blocks in your code.
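A minimal sketch of the first point, replacing a macro constant with typed, scoped constants (the names here are illustrative, not from the article):

```cpp
#include <cassert>
#include <cstdint>

// Before: #define MAX_CLIENTS 64
// The macro is untyped, ignores scope, and can collide with any other
// MAX_CLIENTS anywhere in the program.

// After: typed, scoped constants that obey the usual language rules.
namespace config {
    inline constexpr std::uint32_t max_clients = 64;
    // A value like this could live in a source file generated by the
    // build system, instead of being passed as -DGIT_HASH=... on the
    // command line. "deadbeef" is a placeholder.
    inline constexpr const char* git_hash = "deadbeef";
}
```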
And while it does not solve the issues I mentioned above, you are less likely to want to rename or otherwise transform a platform-specific symbol while not working on that platform.
If you have optional dependencies that enable some features of your software, consider using a plugin system, or splitting your project into several components and applications that are built unconditionally, rather than using #ifdef to disable some code paths when the dependency is missing. Make sure to test your build both with and without that dependency. To avoid the hassle, consider never making a dependency optional. Remember: code that is not compiled is broken code. Even more so than dependencies, features should never be optional at compile time.
Mom, where do flashvars come from?
Provide runtime flags or a plugin system instead. Using #pragma once is less error-prone, easier and faster; kiss the include guards goodbye. Macros should be undefined with #undef as soon as possible. A macro name should stand out from the rest of the code and should not conflict with boost::signal or std::min. The above code has a few issues, and it also happens to be broken.
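To illustrate the include-guard point, here is a header sketched inline (the names are illustrative). The guard's weakness is that its macro name must be unique across the whole project, which is easy to mistype or accidentally reuse:

```cpp
#include <cassert>

// widget.h, sketched inline: the classic include guard.
#ifndef MYLIB_WIDGET_H
#define MYLIB_WIDGET_H
struct Widget { int id; };
#endif // MYLIB_WIDGET_H

// A second inclusion of the same header is skipped by the guard:
#ifndef MYLIB_WIDGET_H
struct Widget { int id; }; // never reached; would otherwise be a redefinition
#endif

// With #pragma once as the first line of the header there is no macro
// name to invent at all, which removes the error-prone part.
```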
While modules should improve compile times, they also offer a barrier through which macros cannot leak. At the time of writing there is no production-ready compiler with that feature, but GCC, MSVC and Clang have implemented it or are in the process of doing so. When the disabled code path is well-formed, that is, when it does not refer to unknown symbols, if constexpr is a better alternative to #ifdef, since the disabled code path will still be part of the AST and checked by the compiler and your tools, including your static analyzer and refactoring programs.
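A small sketch of that idea, assuming a compile-time flag set in one place (for example a generated config header) rather than a -DVERBOSE scattered through the sources; the flag name is illustrative:

```cpp
#include <cassert>
#include <string>

// Compile-time configuration flag, set once by the build.
inline constexpr bool verbose = false;

std::string format_log(const std::string& msg) {
    if constexpr (verbose) {
        // This branch is discarded, but unlike an #ifdef'd-out block it
        // is still parsed and type-checked, so it stays visible to
        // static analyzers and refactoring tools.
        return "[verbose] " + msg;
    } else {
        return msg;
    }
}
```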
Use them if you need to. My advice is to stick to the features offered by each and every compiler you target.
Choose a baseline and stick with it. Everybody likes to write their own logger.
ifdef Considered Harmful, or Portability Experience With C News
A few facilities offer better alternatives to some macro usages, but realistically, you will still have to resort to the preprocessor sooner or later. Fortunately, there is still a lot we can do. One of the most frequent use cases for #define is to query the build environment. We can imagine a set of constants exposed through some std::compiler facility, surfacing some of these build-environment variables.
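Today that querying is done through predefined macros; a hypothetical std::compiler would expose the same data as ordinary constexpr values. The current state of affairs, as a sketch:

```cpp
#include <cassert>
#include <string>

// Querying the build environment the way it is done today: testing the
// macros each compiler predefines about itself.
std::string compiler_name() {
#if defined(__clang__)
    return "clang";
#elif defined(__GNUC__)
    return "gcc";
#elif defined(_MSC_VER)
    return "msvc";
#else
    return "unknown";
#endif
}
```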
In the same vein, we can imagine having some kind of extern compiler constexpr variables, declared in the source code but defined or overridden by the compiler. My hope is that giving the compiler more information about what the various configuration variables are, and maybe even which combinations of variables are valid, would lead to better modeling, and therefore better tooling and static analysis of the source code.
The Rust community never misses an occasion to fiercely promote the merits of the Rust language. And indeed, Rust does a lot of things really well; compile-time configuration is one of them. Using an attribute system to conditionally include a symbol in the compilation unit is a very interesting idea indeed. Moreover, even if a symbol is not to be included in the build, we can still attempt to parse it, and more importantly, the declaration alone gives the compiler sufficient information about the entity to enable powerful tooling, static analysis and refactoring.
Because the compiler knows that f is a valid entity and that it is a function name, it can unambiguously parse the body of the discarded if constexpr statement. Here the compiler only needs to parse the left-hand side, since the rest is not needed for static analysis or tooling.
Of course, referencing a discarded declaration from an active code path would be ill-formed, but the compiler could check that this never happens for any valid configuration. Breaking the Windows build because you wrote your code on a Linux machine would become much harder. We may also want a compile-time random number generator.
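Something close to this already works today with templates. In the sketch below (the function names are hypothetical), a platform-specific function is declared in every build but defined only in a Windows-only source file; the declaration alone lets the compiler parse, check and index the discarded branch:

```cpp
#include <cassert>

// Declared everywhere, defined only in a Windows-only translation unit.
int windows_handle_count();

template <bool IsWindows>
int open_handles() {
    if constexpr (IsWindows) {
        // Parsed and indexed, but never instantiated when IsWindows is
        // false, so the missing definition does not break the link.
        return windows_handle_count();
    } else {
        return 0;
    }
}
```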
Why not? Who cares about reproducible builds anyway? The metaclasses proposal is the best thing since sliced bread, modules and concepts. In particular, P is an amazing paper in many regards. One of the many constructs introduced is the declname keyword, which creates an identifier from an arbitrary sequence of strings and digits. Given that concatenating tokens to form new identifiers is one of the most frequent use cases for macros, this is very interesting indeed. Hopefully the compiler would have enough information about the symbols created or referred to this way to still index them properly.
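For contrast, here is what declname would replace: today, gluing identifiers together means the preprocessor's ## token-pasting operator, and the generated names are invisible to most indexing and refactoring tools. The names below are illustrative:

```cpp
#include <cassert>

// get_##name produces a new identifier by token pasting; no tool that
// does not run the preprocessor can see that get_width exists.
#define DEFINE_GETTER(name) int get_##name() { return name; }

static int width = 640;
static int height = 480;

DEFINE_GETTER(width)  // expands to: int get_width() { return width; }
DEFINE_GETTER(height)
```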
The infamous X macro should also become a thing of the past in the coming years. Since macros are just text replacement, their arguments are lazily evaluated.
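That laziness is easy to demonstrate; in this sketch (names illustrative), the macro argument is substituted as text into the body, so it is only evaluated when the branch is actually taken:

```cpp
#include <cassert>

static bool logging_enabled = false;
static int evaluations = 0;

// Stand-in for an argument that is costly to compute.
int expensive_message() { ++evaluations; return 42; }

// expensive_message() is only evaluated when logging is enabled.
#define LOG_VALUE(expr) do { if (logging_enabled) (void)(expr); } while (0)
```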
So, could we benefit from lazy evaluation in functions? What we would want is something like a macro, but one checked at the syntax level rather than at the token level at which the preprocessor operates. This is part of the metaclasses proposal. A constexpr block is a compound statement in which all the variables are constexpr and free of side effects.
The only purpose of that block is to create injection fragments and modify the properties of the entity in which the block is declared, at compile time. Within the constexpr block we define a macro, log. Notice that macros of this kind are not functions.
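As for lazy evaluation in functions today, current C++ can already get the macro-like effect without the preprocessor by passing a callable; a sketch with illustrative names:

```cpp
#include <cassert>
#include <utility>

static bool logging_enabled = false;
static int evaluations = 0;

int expensive_message() { ++evaluations; return 42; }

// The callable is only invoked when logging is enabled, giving the
// caller lazy evaluation under ordinary language rules.
template <class F>
void log_value(F&& make_value) {
    if (logging_enabled) (void)std::forward<F>(make_value)();
}
```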
Practically every text on code quality I've read agrees that commented-out code is a bad thing. The usual example is that someone changed a line of code and left the old line there as a comment, apparently to confuse people who read the code later on.
Of course, that's a bad thing. But I often find myself leaving commented-out code in another situation: I write a computational-geometry or image-processing algorithm. To understand this kind of code, and to find potential bugs in it, it's often very helpful to display intermediate results. Looking at these values in the debugger usually means looking at a wall of numbers (coordinates, raw pixel values). Not very helpful. Writing a debugger visualizer every time would be overkill. I don't want to leave the visualization code in the final product (it hurts performance, and usually just confuses the end user), but I don't want to lose it, either.
So, most of the time, I just leave the visualization code commented out, with a comment saying what is being visualized.
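One middle ground between commenting the code out and shipping it is to keep it compiled but gate it at runtime, so it can never silently rot. A sketch with hypothetical names:

```cpp
#include <cassert>

// Off in the shipped product, flipped on while debugging.
struct DebugOptions {
    bool show_intermediate = false;
};

int process_image(const DebugOptions& opts) {
    int intermediate = 123; // stand-in for a real intermediate result
    if (opts.show_intermediate) {
        // visualize(intermediate); // hypothetical visualization hook
    }
    return intermediate;
}
```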
When I read the code a year later, I'm usually happy I can just uncomment the visualization code and literally "see what's going on". Am I somehow "over-generalizing" all commented code to include debugging aids as well as senseless, obsolete code? Am I making an overly general conclusion? I've looked at the paragraph in the book again. So I wondered if this rule is over-generalizing, or if I misunderstood the book, or if what I do is bad practice for some reason I didn't know.