question_id | answer_id | title | question | answer
---|---|---|---|---|
119,999 | 120,234 |
How to setup a shared ccache
|
How can I setup a shared ccache without falling into a permissions problem?
I would like to run a nightly or CI build with latest changes and share all created binaries throughout the R&D using a large ccache repository.
|
The easiest solution: create a new group (e.g. "devel"), and make all developers members of it. Give read/write permissions to that group on the directory hierarchy where the cache is maintained. The developers will also need to fix their umask.
|
120,033 | 120,084 |
Any workarounds for non-static member array initialization?
|
In C++, it's not possible to initialize array members in the initialization list, thus member objects should have default constructors and they should be properly initialized in the constructor. Is there any (reasonable) workaround for this apart from not using arrays?
[Anything that can be initialized using only the initialization list is in our application far preferable to using the constructor, as that data can be allocated and initialized by the compiler and linker, and every CPU clock cycle counts, even before main. However, it is not always possible to have a default constructor for every class, and besides, reinitializing the data again in the constructor rather defeats the purpose anyway.]
E.g. I'd like to have something like this (but this one doesn't work):
class OtherClass {
private:
int data;
public:
OtherClass(int i) : data(i) {}; // No default constructor!
};
class Foo {
private:
OtherClass inst[3]; // Array size fixed and known ahead of time.
public:
Foo(...)
: inst[0](0), inst[1](1), inst[2](2)
{};
};
The only workaround I'm aware of is the non-array one:
class Foo {
private:
OtherClass inst0;
OtherClass inst1;
OtherClass inst2;
OtherClass *inst[3];
public:
Foo(...)
: inst0(0), inst1(1), inst2(2) {
inst[0]=&inst0;
inst[1]=&inst1;
inst[2]=&inst2;
};
};
Edit: It should be stressed that OtherClass has no default constructor, and that it is very desirable to have the linker be able to allocate any memory needed (one or more static instances of Foo will be created), using the heap is essentially verboten. I've updated the examples above to highlight the first point.
|
One possible workaround is to avoid the compiler calling the OtherClass constructor at all, and to call it on your own using placement new to initialize it whichever way you need. Example:
class Foo
{
private:
char inst[3*sizeof(OtherClass)]; // Array size fixed. OtherClass has no default ctor.
// use Inst to access, not inst
OtherClass *Inst(int i) {return (OtherClass *)inst + i;}
const OtherClass *Inst(int i) const {return (const OtherClass *)inst + i;}
public:
Foo(...)
{
new (Inst(0)) OtherClass(...);
new (Inst(1)) OtherClass(...);
new (Inst(2)) OtherClass(...);
}
~Foo()
{
Inst(0)->~OtherClass();
Inst(1)->~OtherClass();
Inst(2)->~OtherClass();
}
};
To cater for possible alignment requirements of the OtherClass, you may need to use __declspec(align(x)) if working in VisualC++, or to use a type other than char like:
Type inst[3*(sizeof(OtherClass)+sizeof(Type)-1)/sizeof(Type)];
... where Type is int, double, long long, or whatever describes the alignment requirements.
|
120,876 | 120,916 |
What are the rules for calling the base class constructor?
|
What are the C++ rules for calling the base class constructor from a derived class?
For example, I know in Java, you must do it as the first line of the subclass constructor (and if you don't, an implicit call to a no-arg super constructor is assumed - giving you a compile error if that's missing).
|
Base class constructors are automatically called for you if they have no arguments. If you want to call a superclass constructor with an argument, you must use the subclass's constructor initialization list. Unlike Java, C++ supports multiple inheritance (for better or worse), so the base class must be referred to by name, rather than "super()".
class SuperClass
{
public:
SuperClass(int foo)
{
// do something with foo
}
};
class SubClass : public SuperClass
{
public:
SubClass(int foo, int bar)
: SuperClass(foo) // Call the superclass constructor in the subclass' initialization list.
{
// do something with bar
}
};
More info on the constructor's initialization list here and here.
|
120,957 | 121,014 |
C++ usage in embedded systems
|
What features of C++ should be avoided in embedded systems?
Please classify the answer by reason such as:
memory usage
code size
speed
portability
EDIT: Let's use an ARM7TDMI with 64k RAM as a target to control the scope of the answers.
|
RTTI and Exception Handling:
Increases code-size
Decreases performance
Can often be replaced by cheaper mechanisms or a better software-design.
Templates:
Be careful with them if code size is an issue. If your target CPU has no instruction cache, or only a very tiny one, they may reduce performance as well (templates tend to bloat code if used without care). On the other hand, clever meta-programming can decrease code size too. There is no clear-cut answer on this.
Virtual functions and inheritance:
These are fine for me. I write almost all of my embedded code in C. That does not stop me from using function-pointer tables to mimic virtual functions. They never became a performance problem.
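For illustration, here is a minimal sketch of such a function-pointer table (the shape example and all names are made up for this answer); it is plain C-style code that also compiles as C++:
/* Hand-rolled "vtable": a struct of function pointers shared by all objects of a kind. */
#include <stdio.h>
struct shape;                                  /* forward declaration */
struct shape_vtable {
    double (*area)(const struct shape *s);     /* one "virtual" function slot */
};
struct shape {
    const struct shape_vtable *vtable;         /* hand-rolled vptr */
    double w, h;
};
static double rect_area(const struct shape *s) { return s->w * s->h; }
static double tri_area (const struct shape *s) { return 0.5 * s->w * s->h; }
static const struct shape_vtable rect_vtable = { rect_area };
static const struct shape_vtable tri_vtable  = { tri_area };
int main(void)
{
    struct shape r = { &rect_vtable, 3.0, 4.0 };
    struct shape t = { &tri_vtable,  3.0, 4.0 };
    /* "virtual" dispatch through the table */
    printf("%f %f\n", r.vtable->area(&r), t.vtable->area(&t));
    return 0;
}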
|
121,162 | 121,163 |
What does the explicit keyword mean?
|
What does the explicit keyword mean in C++?
|
The compiler is allowed to make one implicit conversion to resolve the parameters to a function. What this means is that the compiler can use constructors callable with a single parameter to convert from one type to another in order to get the right type for a parameter.
Here's an example class with a constructor that can be used for implicit conversions:
class Foo
{
private:
int m_foo;
public:
// single parameter constructor, can be used as an implicit conversion
Foo (int foo) : m_foo (foo) {}
int GetFoo () { return m_foo; }
};
Here's a simple function that takes a Foo object:
void DoBar (Foo foo)
{
int i = foo.GetFoo ();
}
and here's where the DoBar function is called:
int main ()
{
DoBar (42);
}
The argument is not a Foo object, but an int. However, there exists a constructor for Foo that takes an int so this constructor can be used to convert the parameter to the correct type.
The compiler is allowed to do this once for each parameter.
Prefixing the explicit keyword to the constructor prevents the compiler from using that constructor for implicit conversions. Adding it to the above class will create a compiler error at the function call DoBar (42). It is now necessary to perform the conversion explicitly with DoBar (Foo (42)).
The reason you might want to do this is to avoid accidental construction that can hide bugs.
Contrived example:
You have a MyString class with a constructor that constructs a string of the given size. You have a function print(const MyString&) (as well as an overload print (char *string)), and you call print(3) (when you actually intended to call print("3")). You expect it to print "3", but it prints an empty string of length 3 instead.
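A compilable sketch of that contrived example (this MyString is a stand-in written for illustration, not a real library class):
#include <iostream>
#include <string>
class MyString
{
public:
    MyString(int size) : m_data(size, ' ') {}    // constructs a string of the given size
    MyString(const char* s) : m_data(s) {}
    const std::string& str() const { return m_data; }
private:
    std::string m_data;
};
void print(const MyString& s) { std::cout << "[" << s.str() << "]\n"; }
int main()
{
    print(3);               // prints "[   ]": 3 was silently converted to MyString(3)
    print(MyString("3"));   // prints "[3]": what was actually intended
}
Declaring the int constructor as explicit turns the print(3) call into a compile error, forcing the caller to spell the conversion out.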
|
121,326 | 121,370 |
What does the GDB backtrace message "0x0000000000000000 in ?? ()" mean?
|
What does it mean when it gives a backtrace with the following output?
#0 0x00000008009c991c in pthread_testcancel () from /lib/libpthread.so.2
#1 0x00000008009b8120 in sigaction () from /lib/libpthread.so.2
#2 0x00000008009c211a in pthread_mutexattr_init () from /lib/libpthread.so.2
#3 0x0000000000000000 in ?? ()
The program has crashed with a standard signal 11, segmentation fault.
My application is a multi-threaded FastCGI C++ program running on FreeBSD 6.3, using pthread as the threading library.
It has been compiled with -g and all the symbol tables for my source are loaded, according to info sources.
As is clear, none of my actual code appears in the trace but instead the error seems to originate from standard pthread libraries. In particular, what is ?? () ????
EDIT: eventually tracked the crash down to a standard invalid memory access in my main code. Doesn't explain why the stack trace was corrupted, but that's a question for another day :)
|
gdb wasn't able to extract the proper return address from pthread_mutexattr_init; it got an address of 0. The "??" is the result of looking up address 0 in the symbol table. It cannot find a symbolic name, so it prints a default "??"
Unfortunately right offhand I don't know why it could not extract the correct return address.
|
121,757 | 121,805 |
How do you implement Coroutines in C++
|
I doubt it can be done portably, but are there any solutions out there? I think it could be done by creating an alternate stack and resetting SP, BP, and IP on function entry, and having yield save IP and restore SP+BP. Destructors and exception safety seem tricky but solvable.
Has it been done? Is it impossible?
|
Yes it can be done without a problem. All you need is a little assembly code to move the call stack to a newly allocated stack on the heap.
I would look at the boost::coroutine library.
The one thing that you should watch out for is a stack overflow. On most operating systems overflowing the stack will cause a segfault, because the virtual memory page is not mapped. However, if you allocate the stack on the heap you don't get any such guarantee.
Just keep that in mind.
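As a rough illustration of the "move the call stack to a newly allocated stack" idea, here is a minimal sketch using POSIX ucontext (not boost::coroutine, and not available on every platform):
#include <ucontext.h>
#include <cstdio>
static ucontext_t main_ctx, coro_ctx;
static void coroutine_body()
{
    std::printf("step 1\n");
    swapcontext(&coro_ctx, &main_ctx);   // "yield" back to the caller
    std::printf("step 2\n");
    // falling off the end returns to uc_link (main_ctx)
}
int main()
{
    static char stack[64 * 1024];        // the coroutine's stack, allocated by us
    getcontext(&coro_ctx);
    coro_ctx.uc_stack.ss_sp = stack;
    coro_ctx.uc_stack.ss_size = sizeof(stack);
    coro_ctx.uc_link = &main_ctx;        // where to continue when the coroutine ends
    makecontext(&coro_ctx, coroutine_body, 0);
    swapcontext(&main_ctx, &coro_ctx);   // run until the first yield
    std::printf("back in main\n");
    swapcontext(&main_ctx, &coro_ctx);   // resume after the yield
    std::printf("coroutine finished\n");
}
boost::coroutine wraps this kind of context switching (with properly sized, guarded stacks) behind a safer interface, which is why it is the recommended starting point.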
|
121,787 | 121,818 |
What is the STL implementation with the lowest memory footprint?
|
I am working on a very large scale computing library that is using STL heavily. The library is being built using MSVC2003 and it is using its STL implementation.
I am looking for an alternative STL implementation that would help the library lower its memory requirements and increase its performance.
It is not possible to switch to a newer version of MSVC for the moment.
I would like some feedback on real world usage not based on benchmarks if possible.
EDIT: To make it a little clearer: for example, some STL implementations (like STLSoft) propose specific optimizations for string concatenation; these might sound small in impact but they can lead to large improvements. STLport is another good example, where they clearly state their goal of having the fastest STL implementation around; there is also stdlib++, etc. All of these could be good candidates, but I have no time to test them all, so I require some community help on that.
|
STLPort. Haven't measured memory usage differences, but it's definitely quicker (yes, real world usage).
|
122,316 | 122,368 |
Template Constraints C++
|
In C# we can define a generic type that imposes constraints on the types that can be used as the generic parameter. The following example illustrates the usage of generic constraints:
interface IFoo
{
}
class Foo<T> where T : IFoo
{
}
class Bar : IFoo
{
}
class Simpson
{
}
class Program
{
static void Main(string[] args)
{
Foo<Bar> a = new Foo<Bar>();
Foo<Simpson> b = new Foo<Simpson>(); // error CS0309
}
}
Is there a way we can impose constraints on template parameters in C++?
C++0x has native support for this, but I am talking about the current standard C++.
|
As someone else has mentioned, C++0x is getting this built into the language. Until then, I'd recommend Bjarne Stroustrup's suggestions for template constraints.
Edit: Boost also has an alternative of its own.
Edit2: Looks like concepts have been removed from C++0x.
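For a flavour of what such a constraint class looks like, here is a sketch in the spirit of Stroustrup's Derived_from suggestion, written for the IFoo example above (treat it as illustrative rather than his exact code):
class IFoo
{
public:
    virtual ~IFoo() {}
};
// Compile-time check that T derives from B: the assignment T* -> B*
// fails to compile for unrelated types, with a reasonably readable error.
template <class T, class B>
struct Derived_from
{
    static void constraints(T* p) { B* pb = p; (void)pb; }
    Derived_from() { void (*check)(T*) = constraints; (void)check; }
};
template <class T>
class Foo : private Derived_from<T, IFoo>   // enforce the constraint
{
};
class Bar : public IFoo {};
class Simpson {};
int main()
{
    Foo<Bar> a;          // fine
    // Foo<Simpson> b;   // error: cannot convert Simpson* to IFoo*
    (void)a;
}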
|
122,455 | 122,485 |
Handling file paths cross platform
|
Do any C++ GNU standalone classes exist which handle paths cross platform? My applications build on Windows and Linux. Our configuration files refer to another file in a separate directory. I'd like to be able to read the path for the other configuration file into a class which would work on both Linux and Windows.
Which class would offer the smallest footprint to translate paths to use on either system? Thanks
|
Unless you're using absolute paths, there's no need to translate at all - Windows automatically converts forward slashes into backslashes, so if you use relative paths with forward slash path separators, you'll be golden. You should really avoid absolute paths if at all possible.
|
122,782 | 123,233 |
How can I wrap BOOST in a separate namespace?
|
I'm looking to have two versions of BOOST compiled into a project at the same time. Ideally they should be usable along these lines:
boost_1_36_0::boost::shared_ptr<SomeClass> someClass(new SomeClass());
boost_1_35_0::boost::regex expression("[0-9]", boost_1_35_0::boost::regex_constants::basic);
|
I read (well scanned) through the development list discussion. There's no easy solution. To sum up:
Wrapping header files in a namespace declaration
namespace boost_1_36_0 {
#include <boost_1_36_0/boost/regex.hpp>
}
namespace boost_1_35_0 {
#include <boost_1_35_0/boost/shared_ptr.hpp>
}
Requires modifying source files
Doesn't allow for both versions to be included in the same translation unit, due to the fact that macros do not respect namespaces.
Defining boost before including headers
#define boost boost_1_36_0
#include <boost_1_36_0/boost/regex.hpp>
#undef boost
#define boost boost_1_35_0
#include <boost_1_35_0/boost/shared_ptr.hpp>
#undef boost
Source files can simply be compiled with -Dboost=boost_1_36_0
Still doesn't address macro conflicts in a single translation unit.
Some internal header file inclusions may be messed up, since this sort of thing does happen.
#if defined(SOME_CONDITION)
# define HEADER <boost/some/header.hpp>
#else
# define HEADER <boost/some/other/header.hpp>
#endif
But it may be easy enough to work around those cases.
Modifying the entire boost library to replace namespace boost {..} with namespace boost_1_36_0 {...} and then providing a namespace alias. Replace all BOOST_XYZ macros and their uses with BOOST_1_36_0_XYZ macros.
This would likely work if you were willing to put into the effort.
|
122,886 | 122,937 |
Fast plane rotation algorithm?
|
I am working on an application that detects the most prominent rectangle in an image, then seeks to rotate it so that the bottom left of the rectangle rests at the origin, similar to how IUPR's OSCAR system works. However, once the most prominent rectangle is detected, I am unsure how to take into account the depth component or z-axis, as the rectangle won't always be "head-on". Any examples to further my understanding would be greatly appreciated. Seen below is an example from IUPR's OSCAR system.
(Example image from IUPR's OSCAR system: http://quito.informatik.uni-kl.de/oscar/oscar.php?serverimage=img_0324.jpg&montage=use)
|
You don't actually need to deal with the 3D information in this case; it's just a mapping function, from one set of coordinates to another.
Look at affine transformations, they're capable of correcting simple skew and perspective effects. You should be able to find code somewhere that will calculate a transform from the 4 points at the corners of your rectangle.
Almost forgot - if "fast" is really important, you could simplify the system to only use simple shear transformations in combination, though that'll have a bad impact on image quality for highly-tilted subjects.
|
123,012 | 123,063 |
Do you use Qt and why do you use it?
|
Pros and cons? How long have you used it? What about Jambi?
|
I've used Qt on a couple of projects I did in C++ on several platforms over a period of seven years. I think it works pretty well, and it was definitely quicker for me to develop a decent GUI app on the Mac than plodding through a language I didn't know (Objective-C) at the time.
I think the signal/slot mechanism is a bit funky but isn't horrible. Once you've used it for a bit, it's not a show stopper. The connection stuff is easy to bungle up (or at least it was), and it's always good to check the return value on those connections, because otherwise your app will go merrily on its way and not tell you that it didn't work.
I've never used jambi.
|
123,758 | 123,765 |
How do I remove code duplication between similar const and non-const member functions?
|
Let's say I have the following class X where I want to return access to an internal member:
class Z
{
// details
};
class X
{
std::vector<Z> vecZ;
public:
Z& Z(size_t index)
{
// massive amounts of code for validating index
Z& ret = vecZ[index];
// even more code for determining that the Z instance
// at index is *exactly* the right sort of Z (a process
// which involves calculating leap years in which
// religious holidays fall on Tuesdays for
// the next thousand years or so)
return ret;
}
const Z& Z(size_t index) const
{
// identical to non-const X::Z(), except printed in
// a lighter shade of gray since
// we're running low on toner by this point
}
};
The two member functions X::Z() and X::Z() const have identical code inside the braces. This is duplicate code and can cause maintenance problems for long functions with complex logic.
Is there a way to avoid this code duplication?
|
Yes, it is possible to avoid the code duplication. You need to use the const member function to have the logic and have the non-const member function call the const member function and re-cast the return value to a non-const reference (or pointer if the functions returns a pointer):
class X
{
std::vector<Z> vecZ;
public:
const Z& z(size_t index) const
{
// same really-really-really long access
// and checking code as in OP
// ...
return vecZ[index];
}
Z& z(size_t index)
{
// One line. One ugly, ugly line - but just one line!
return const_cast<Z&>( static_cast<const X&>(*this).z(index) );
}
#if 0 // A slightly less-ugly version
Z& z(size_t index)
{
// Two lines -- one cast. This is slightly less ugly but takes an extra line.
const X& constMe = *this;
return const_cast<Z&>( constMe.z(index) );
}
#endif
};
NOTE: It is important that you do NOT put the logic in the non-const function and have the const-function call the non-const function -- it may result in undefined behavior. The reason is that a constant class instance gets cast as a non-constant instance. The non-const member function may accidentally modify the class, which the C++ standard states will result in undefined behavior.
|
124,623 | 124,676 |
Computing pseudo-inverse of a matrix in C++
|
I'm looking to compute the Moore-Penrose pseudo-inverse of a matrix in C++, can someone point me to a library implementation or a numerical recipe?
Thanks!
|
You need 'Singular Value Decomposition', for which you can find a C implementation here, from Numerical Recipes in C.
This other site describes how to use singular value decomposition to calculate the pseudo-inverse.
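For reference (standard linear algebra, not taken from either link): if the SVD of A is A = U Σ V^T, the pseudo-inverse is A+ = V Σ+ U^T, where Σ+ is formed by replacing each non-zero singular value with its reciprocal (treating values below a small tolerance as zero) and transposing the result.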
|
124,880 | 124,904 |
Is it possible to prevent stack allocation of an object and only allow it to be instantiated with 'new'?
|
Is it possible to prevent stack allocation of an object and only allow it to be instantiated with 'new' on the heap?
|
One way you could do this would be to make the constructors private and only allow construction through a static method that returns a pointer. For example:
class Foo
{
public:
~Foo();
static Foo* createFoo()
{
return new Foo();
}
private:
Foo();
Foo(const Foo&);
Foo& operator=(const Foo&);
};
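A short usage sketch (assuming the declared constructor and destructor are defined somewhere):
int main()
{
    // Foo onStack;                    // error: Foo::Foo() is private
    Foo* onHeap = Foo::createFoo();    // the only way to create a Foo
    delete onHeap;                     // the destructor is public, so the caller may delete it
}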
|
124,966 | 124,970 |
Existence map in C++
|
I want something like an std::map, but I only want to see if the item exists or not, I don't actually need a key AND a value. What should I use?
|
Looks like you need a std::set.
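A small sketch of how that looks in practice (the names here are made up):
#include <iostream>
#include <set>
#include <string>
int main()
{
    std::set<std::string> seen;
    seen.insert("foo");
    if (seen.count("foo"))           // pure membership test, no mapped value needed
        std::cout << "foo exists\n";
    if (!seen.count("bar"))
        std::cout << "bar does not\n";
}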
|
125,094 | 125,454 |
Transparently swapping pointers to character arrays in C++
|
I have a 2D character array:
char nm[MAX1][MAX2] = { "john", "bob", "david" };
I want to swap two of these elements (without std::swap) by simply writing
swapPointers(nm[0], nm[1]);
where swapPointers looks like this
void swapPointers(char *&a, char *&b)
{
char *temp = a;
a = b;
b = temp;
}
However, this does not compile (and while adding casts makes it compile, the pointers end up pointing to wrong/weird locations).
Can anybody help?
Thanks!
|
Zan is close, but his problem is that his 'swap' function can take any pointer to characters. This can cause problems if misused. Here is a safer version:
void swap(char (&x)[MAX2], char (&y)[MAX2])
{
char temp[MAX2];
memcpy(temp, x, MAX2);
memcpy(x, y, MAX2);
memcpy(y, temp, MAX2);
}
There is also a misunderstanding on the part of the poster: 'nm' is a 2-dimensional array of characters. There are no pointers. nm[0], nm[2], etc... are also not pointers either -- they are still (1-dimensional) arrays. The fact that 1-dimensional arrays are implicitly convertible to pointers causes this type of confusion among many C and C++ programmers.
In order to swap the data in the 2-dimensional array, you have to swap blocks of memory of size MAX2 -- as indicated by both 'swap' functions Zan and I wrote.
|
125,124 | 125,225 |
C++ Passing Options To Executable
|
How do you pass options to an executable? Is there an easier way than making the options boolean arguments?
EDIT: The last two answers have suggested using arguments. I know I can code a workable solution like that, but I'd rather have them be options.
EDIT2: Per requests for clarification, I'll use this simple example:
It's fairly easy to handle arguments because they automatically get parsed into an array.
./printfile file.txt 1000
If I want to know what the name of the file the user wants to print, I access it via argv[1].
Now about how this situation:
./printfile file.txt 1000 --nolinebreaks
The user wants to print the file with no line breaks. This is not required for the program to be able to run (as the filename and number of lines to print are), but the user has the option of using it if s/he would like. Now I could do this using:
./printfile file.txt 1000 true
The usage prompt would inform the user that the third argument is used to determine whether to print the file with line breaks or not. However, this seems rather clumsy.
|
You seem to think that there is some fundamental difference between "options" that start with "--" and "arguments" that don't. The only difference is in how you parse them.
It might be worth your time to look at GNU's getopt()/getopt_long() option parser. It supports passing arguments with options such as --number-of-line-breaks 47.
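A minimal sketch of what that could look like for the printfile example (option and variable names are just taken from the question, and error handling is omitted):
#include <getopt.h>
#include <cstdio>
int main(int argc, char* argv[])
{
    bool linebreaks = true;
    static const struct option long_opts[] = {
        { "nolinebreaks", no_argument, 0, 'n' },
        { 0, 0, 0, 0 }
    };
    int c;
    while ((c = getopt_long(argc, argv, "n", long_opts, 0)) != -1)
    {
        if (c == 'n')
            linebreaks = false;
    }
    // Remaining non-option arguments (file name, line count) start at optind.
    if (optind < argc)
        std::printf("file: %s, line breaks: %s\n",
                    argv[optind], linebreaks ? "yes" : "no");
}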
|
125,580 | 125,605 |
What are the advantages of using the C++ Boost libraries?
|
So, I've been reading through and it appears that the Boost libraries get used a lot in practice (not at my shop, though). Why is this? and what makes it so wonderful?
|
Boost is used so extensively because:
It is open-source and peer-reviewed.
It provides a wide range of platform-agnostic functionality that the STL missed.
It is a complement to the STL rather than a replacement.
Many Boost developers are on the C++ standards committee. In fact, many parts of Boost are being considered for inclusion in the next C++ standard library.
It is documented nicely.
Its license allows inclusion in open-source and closed-source projects.
Its features are not usually dependent on each other so you can link only the parts you require. [Luc Hermitte's comment]
|
125,597 | 125,811 |
Boost dependency for a C++ open source project?
|
Boost is meant to be the standard non-standard C++ library that every C++ user can use. Is it reasonable to assume it's available for an open source C++ project, or is it a large dependency too far?
|
Basically your question boils down to “is it reasonable to have [free library xyz] as a dependency for a C++ open source project.”
Now consider the following quote from Stroustrup and the answer is really a no-brainer:
Without a good library, most interesting tasks are hard to do in
C++; but given a good library, almost any task can be made easy
Assuming that this is correct (and in my experience, it is) then writing a reasonably-sized C++ project without dependencies is downright unreasonable.
Developing this argument further, the one C++ dependency (apart from system libraries) that can reasonably be expected on a (developer's) client system is the Boost libraries.
I know that they aren't, but it's not an unreasonable assumption for a piece of software to make.
If a piece of software can't even rely on Boost, it can't rely on any library.
|
125,806 | 128,327 |
Capturing Input in Linux
|
First, yes I know about this question, but I'm looking for a bit more information than that. I actually have a fairly similar problem, in that I need to be able to capture input for mouse/keyboard/joystick, and I'd also like to avoid SDL if at all possible. I was more or less wondering if anyone knows where I can get some decent primers on handling input from devices in Linux, perhaps even some tutorials. SDL works great for cross-platform input handling, but I'm not going to be using anything else at all from SDL, so I'd like to cut it out altogether. Suggestions, comments, and help are all appreciated. Thanks!
Edit for clarity:
The point is to capture mouse motion, keyboard press/release, mouse clicks, and potentially joystick handling for a game.
|
Using the link below look at the function void kGUISystemX::Loop(void)
This is my main loop for getting input via keyboard and mouse using X Windows on Linux.
http://code.google.com/p/kgui/source/browse/trunk/kguilinux.cpp
Here is a snippet:
if(XPending(m_display))
{
XNextEvent(m_display, &m_e);
switch(m_e.type)
{
case MotionNotify:
m_mousex=m_e.xmotion.x;
m_mousey=m_e.xmotion.y;
break;
case ButtonPress:
switch(m_e.xbutton.button)
{
case Button1:
m_mouseleft=true;
break;
case Button3:
m_mouseright=true;
break;
case Button4:/* middle mouse wheel moved */
m_mousewheel=1;
break;
case Button5:/* middle mouse wheel moved */
m_mousewheel=-1;
break;
}
break;
case ButtonRelease:
switch(m_e.xbutton.button)
{
case Button1:
m_mouseleft=false;
break;
case Button3:
m_mouseright=false;
break;
}
break;
case KeyPress:
{
XKeyEvent *ke;
int ks;
int key;
ke=&m_e.xkey;
kGUI::SetKeyShift((ke->state&ShiftMask)!=0);
kGUI::SetKeyControl((ke->state&ControlMask)!=0);
ks=XLookupKeysym(ke,(ke->state&ShiftMask)?1:0);
......
|
125,880 | 125,899 |
Can anyone recommend a C++ std::map replacement container?
|
Maps are great to get things done easily, but they are memory hogs and suffer from caching issues. And when you have a map in a critical loop that can be bad.
So I was wondering if anyone can recommend another container that has the same API but uses, let's say, a vector or hash implementation instead of a tree implementation. My goal here is to swap the containers and not have to rewrite all the user code that relies on the map.
Update: performance-wise the best solution would be a tested map facade on a std::vector.
|
See Loki::AssocVector and/or hash_map (most of STL implementations have this one).
|
126,279 | 126,285 |
C99 stdint.h header and MS Visual Studio
|
To my amazement I just discovered that the C99 stdint.h is missing from MS Visual Studio 2003 upwards. I'm sure they have their reasons, but does anyone know where I can download a copy? Without this header I have no definitions for useful types such as uint32_t, etc.
|
Turns out you can download a MS version of this header from:
https://github.com/mattn/gntp-send/blob/master/include/msinttypes/stdint.h
A portable one can be found here:
http://www.azillionmonkeys.com/qed/pstdint.h
Thanks to the Software Ramblings blog.
NB: The Public Domain version of the header, mentioned by Michael Burr in a comment, can be found as an archived copy here. An updated version can be found in the Android source tree for libusb_aah.
|
126,297 | 126,446 |
Automatic Casts redux
|
After I messed up the description of my previous post on this I have sat down and tried to convey my exact intent.
I have a class called P which performs some distinct purpose. I also have PW which performs some distinct purpose on P. PW has no member variables, just member functions.
From this description you would assume that the code would follow like this:
class P
{
public:
void a( );
};
class PW
{
public:
PW( const P& p ) : p( p ) { }
void b( );
P& p;
};
class C
{
public:
P GetP( ) const { return p; }
private:
P p;
};
// ...
PW& p = c.GetP( ); // valid
// ...
However that brings up a problem. I can't call the functions of P without indirection everywhere.
// ...
p->p->a( )
// ...
What I would like to do is call p->a( ) and have it automatically determine that I would like to call the member function of P.
Also having a member of PW called a doesn't really scale - what if I add (or remove) another function to P - this will need to be added (or removed) to PW.
|
You could try overriding operator* and operator-> to return access to the embedded p.
Something like this might do the trick:
#include <iostream>
#include <cstdlib>

class P
{
public:
void a( ) { std::cout << "a" << std::endl; }
};
class PW
{
public:
PW(P& p) : p(p) { }
void b( ) { std::cout << "b" << std::endl; }
P & operator*() { return p; }
P * operator->() { return &p; }
private:
P & p;
};
class C
{
public:
P & getP() { return p; }
private:
P p;
};
int main()
{
C c;
PW pw(c.getP());
(*pw).a();
pw->a();
pw.b();
return EXIT_SUCCESS;
}
This code prints
a
a
b
However, this method may confuse the user, since the semantics of operator* and operator-> become a little messed up.
|
126,751 | 127,108 |
Compilation fails randomly: "cannot open program database"
|
During a long compilation with Visual Studio 2005 (version 8.0.50727.762), I sometimes get the following error in several files in some project:
fatal error C1033: cannot open program database 'v:\temp\apprtctest\win32\release\vc80.pdb'
(The file mentioned is either vc80.pdb or vc80.idb in the project's temp dir.)
The next build of the same project succeeds. There is no other Visual Studio open that might access the same files.
This is a serious problem because it makes nightly compilation impossible.
|
It is possible that an antivirus or a similar program is touching the pdb file on write - an antivirus is the most likely suspect in this scenario. I'm afraid that I can only give you some general pointers, based on my past experience in setting nightly builds in our shop. Some of these may sound trivial, but I'm including them for the sake of completion.
First and foremost: make sure you start up with a clean slate. That is, force-delete the output directory of the build before you start your nightly.
If you have an antivirus, antispyware or other such programs on your nightly machine, consider removing them. If that's not an option, add your obj folder to the exclusion list of the program.
(optional) Consider using tools such as VCBuild or MSBuild as part of your nightly. I think it's better to use MSBuild if you're on a multicore machine. We use IncrediBuild for nightlies and MSBuild for releases, and never encountered the problem you describe.
If nothing else works, you can schedule a watchdog script a few hours after the build starts and check its status; if the build fails, the watchdog should restart it. This is an ugly hack, but it's better than nothing.
|
126,800 | 126,847 |
Is there a way to determine if an exception is occurring?
|
In a destructor, is there a way to determine if an exception is currently being processed?
|
You can use std::uncaught_exception(), but it might not do what you think it does: see GoTW#47 for more information.
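A tiny sketch of what it reports (keeping the GotW caveat in mind: it only says "some exception is in flight", not "this destructor is running because of one"):
#include <exception>
#include <iostream>
struct Logger
{
    ~Logger()
    {
        if (std::uncaught_exception())
            std::cout << "destroyed during stack unwinding\n";
        else
            std::cout << "destroyed normally\n";
    }
};
int main()
{
    try
    {
        Logger l;
        throw 42;    // l is destroyed while the exception propagates
    }
    catch (...) {}
    Logger l2;       // destroyed normally at the end of main
}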
|
126,966 | 127,182 |
Is there a good lightweight multiplatform C++ timer queue?
|
What I'm looking for is a simple timer queue possibly with an external timing source and a poll method (in this way it will be multi-platform). Each enqueued message could be an object implementing a simple interface with a virtual onTimer() member function.
|
Boost::ASIO contains an asynchronous timer implementation. That might work for you.
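A minimal sketch of what that might look like with the Boost.Asio of that era (the timer period and handler are placeholders):
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>
void on_timer(const boost::system::error_code& ec)
{
    if (!ec)
        std::cout << "timer fired\n";
}
int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer timer(io, boost::posix_time::seconds(1));
    timer.async_wait(&on_timer);
    io.run();    // or call io.poll() from your own loop for a poll-style design
}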
|
127,290 | 128,221 |
Is it possible to subclass a C struct in C++ and use pointers to the struct in C code?
|
Is there a side effect in doing this:
C code:
struct foo {
int k;
};
int ret_foo(const struct foo* f){
return f->k;
}
C++ code:
class bar : public foo {
int my_bar() {
return ret_foo( (foo*)this );
}
};
There's an extern "C" around the C++ code and each code is inside its own compilation unit.
Is this portable across compilers?
|
This is entirely legal. In C++, classes and structs are identical concepts, with the exception that all struct members are public by default. That's the only difference. So asking whether you can extend a struct is no different than asking if you can extend a class.
There is one caveat here. There is no guarantee of layout consistency from compiler to compiler. So if you compile your C code with a different compiler than your C++ code, you may run into problems related to member layout (padding especially). This can even occur when using C and C++ compilers from the same vendor.
I have had this happen with gcc and g++. I worked on a project which used several large structs. Unfortunately, g++ packed the structs significantly looser than gcc, which caused significant problems sharing objects between C and C++ code. We eventually had to manually set packing and insert padding to make the C and C++ code treat the structs the same. Note however, that this problem can occur regardless of subclassing. In fact we weren't subclassing the C struct in this case.
|
127,426 | 127,516 |
GNU compiler warning "class has virtual functions but non-virtual destructor"
|
I have defined an interface in C++, i.e. a class containing only pure virtual functions.
I want to explicitly forbid users of the interface to delete the object through a pointer to the interface, so I declared a protected and non-virtual destructor for the interface, something like:
class ITest{
public:
virtual void doSomething() = 0;
protected:
~ITest(){}
};
void someFunction(ITest * test){
test->doSomething(); // ok
// deleting object is not allowed
// delete test;
}
The GNU compiler gives me a warning saying:
class 'ITest' has virtual functions but non-virtual destructor
Once the destructor is protected, what is the difference in having it virtual or non-virtual?
Do you think this warning can be safely ignored or silenced?
|
It's more or less a bug in the compiler. Note that in more recent versions of the compiler this warning does not get thrown (at least in 4.3 it doesn't). Having the destructor be protected and non-virtual is completely legitimate in your case.
See here for an excellent article by Herb Sutter on the subject. From the article:
Guideline #4: A base class destructor should be either public and virtual, or protected and nonvirtual.
|
127,514 | 127,520 |
Resizing Controls in MFC
|
I am writing a program which has two panes (via CSplitter), however I am having problems figuring out how to resize the controls in each frame. For simplicity, can someone tell me how I would do it for a basic frame with a single CEdit control?
I'm fairly sure it is to do with the CEdit::OnSize() function... But I'm not really getting anywhere...
Thanks! :)
|
When your frame receives an OnSize message it will give you the new width and height - you can simply call the CEdit SetWindowPos method passing it these values.
Assume CMyPane is your splitter pane and it contains a CEdit you created in OnCreate called m_wndEdit:
void CMyPane::OnSize(UINT nType, int cx, int cy)
{
m_wndEdit.SetWindowPos(NULL, 0, 0, cx, cy, SWP_NOMOVE | SWP_NOACTIVATE | SWP_NOZORDER);
}
|
127,608 | 276,671 |
What are the access restrictions on accessing a DSN
|
We are running part of our app as a Windows service and it needs to be able to access DSNs in order to import through ODBC. However there seem to be a lot of restrictions, found through trial and error, on what DSNs it can access. For example it seems that it cannot
1. access a system DSN unless the account that is running the service has admin privileges. (I get an Access Denied error, when trying to connect)
2. access a user DSN that was created by a different user (this one is understandable).
3. access a file DSN across the network
I've read that the purpose of a file DSN is to allow other computers to use it to connect, however I can't seem to make that work.
So does anyone know, or know where I can find out, what all the rules and restrictions on accessing a DSN are when using a Windows service?
Thanks
|
This is somewhere between your #1 and #2: sometimes correct file permissions are also necessary. I once had troubles on a Vista machine connecting to a DB2 DSN because, for whatever reason (maybe to write out temp files; although I don't know why it would do such a thing in this location instead of a user-specific one), the driver needed write access to the directory where IBM had installed the client binaries and libs, which had been done by an Administrator and was in the root of the C drive.
|
129,919 | 130,025 |
Populating a database with file names from directories
|
I have an application which behaves as a slideshow for all pictures in a folder. It is written in Borland's C++ Builder (9). It currently uses some borrowed code to throw the filenames into a listbox and save the listbox items as a text file.
I want to update this so that the filenames are stored in a proper database so that I can include extra fields and do proper SQL things with it.
So basically I would be able to work it out if I saw some 'sample' code doing the same thing.
So if anyone knows of any code that does this I would be grateful. It needs to be able to do it on certain file types... not just all the files.
|
You basically need to write a recursive function with a TDataSet parameter.
(I could not compile my code, so you get it "as is")
void AddFiles(AnsiString path, TDataSet *DataSet)
{
TSearchRec sr;
int f;
f = FindFirst(path+"\\*.*", faAnyFile, sr);
while( !f )
{
if(sr.Attr & faDirectory)
{
if(sr.Name != "." && sr.Name != "..")
{
AddFiles(path + "\\" + sr.Name, DataSet);
}
}
else
{
DataSet->Append();
DataSet->FieldByName("Name")->Value = sr.Name;
/* other fields ... */
DataSet->Post();
}
f = FindNext(sr);
}
FindClose(sr);
}
|
130,117 | 130,123 |
If you shouldn't throw exceptions in a destructor, how do you handle errors in it?
|
Most people say never throw an exception out of a destructor - doing so results in undefined behavior. Stroustrup makes the point that "the vector destructor explicitly invokes the destructor for every element. This implies that if an element destructor throws, the vector destruction fails... There is really no good way to protect against exceptions thrown from destructors, so the library makes no guarantees if an element destructor throws" (from Appendix E3.2).
This article seems to say otherwise - that throwing destructors are more or less okay.
So my question is this - if throwing from a destructor results in undefined behavior, how do you handle errors that occur during a destructor?
If an error occurs during a cleanup operation, do you just ignore it? If it is an error that can potentially be handled up the stack but not right in the destructor, doesn't it make sense to throw an exception out of the destructor?
Obviously these kinds of errors are rare, but possible.
|
Throwing an exception out of a destructor is dangerous.
If another exception is already propagating, the application will terminate.
#include <iostream>
class Bad
{
public:
// Added the noexcept(false) so the code keeps its original meaning.
// Post C++11 destructors are by default `noexcept(true)` and
// this will (by default) call terminate if an exception
// escapes the destructor.
//
// But this example is designed to show that terminate is called
// if two exceptions are propagating at the same time.
~Bad() noexcept(false)
{
throw 1;
}
};
class Bad2
{
public:
~Bad2()
{
throw 1;
}
};
int main(int argc, char* argv[])
{
try
{
Bad bad;
}
catch(...)
{
std::cout << "Print This\n";
}
try
{
if (argc > 3)
{
Bad bad; // This destructor will throw an exception that escapes (see above)
throw 2; // But having two exceptions propagating at the
// same time causes terminate to be called.
}
else
{
Bad2 bad; // The exception in this destructor will
// cause terminate to be called.
}
}
catch(...)
{
std::cout << "Never print this\n";
}
}
This basically boils down to:
Anything dangerous (i.e. that could throw an exception) should be done via public methods (not necessarily directly). The user of your class can then potentially handle these situations by using the public methods and catching any potential exceptions.
The destructor will then finish off the object by calling these methods (if the user did not do so explicitly), but any exceptions thrown are caught and dropped (after attempting to fix the problem).
So in effect you pass the responsibility onto the user. If the user is in a position to correct exceptions they will manually call the appropriate functions and process any errors. If the user of the object is not worried (as the object will be destroyed) then the destructor is left to take care of business.
An example:
std::fstream
The close() method can potentially throw an exception.
The destructor calls close() if the file has been opened but makes sure that any exceptions do not propagate out of the destructor.
So if the user of a file object wants to do special handling for problems associated to closing the file they will manually call close() and handle any exceptions. If on the other hand they do not care then the destructor will be left to handle the situation.
Scott Meyers has an excellent article about the subject in his book "Effective C++"
Edit:
Apparently also in "More Effective C++"
Item 11: Prevent exceptions from leaving destructors
|
130,237 | 130,255 |
How to package a Linux binary for my Open Source application?
|
I have an Open Source app and I currently only post the binary for the Windows build. At this point Linux users have to get the source and compile it. Is there a standard way for posting a Linux binary?
My app is in c / c++ and compiled with gcc, the only external Linux code I use is X Windows and CUPS.
|
The most common way would be to package it in a .rpm file for RedHat-based distros like Fedora, or a .deb file for Debian-based distros like Ubuntu.
|
130,322 | 130,528 |
How do you pass a member function pointer?
|
I am trying to pass a member function within a class to a function that takes a pointer to a member function of that class. The problem I am having is that I am not sure how to properly do this within the class using the this pointer. Does anyone have suggestions?
Here is a copy of the class that is passing the member function:
class testMenu : public MenuScreen{
public:
bool draw;
MenuButton<testMenu> x;
testMenu():MenuScreen("testMenu"){
x.SetButton(100,100,TEXT("buttonNormal.png"),TEXT("buttonHover.png"),TEXT("buttonPressed.png"),100,40,&this->test2);
draw = false;
}
void test2(){
draw = true;
}
};
The function x.SetButton(...) is contained in another class, where "object" is a template.
void SetButton(int xPos, int yPos, LPCWSTR normalFilePath, LPCWSTR hoverFilePath, LPCWSTR pressedFilePath, int Width, int Height, void (object::*ButtonFunc)()) {
BUTTON::SetButton(xPos, yPos, normalFilePath, hoverFilePath, pressedFilePath, Width, Height);
this->ButtonFunc = &ButtonFunc;
}
If anyone has any advice on how I can properly send this function so that I can use it later.
|
To call a member function by pointer, you need two things: A pointer to the object and a pointer to the function. You need both in MenuButton::SetButton()
template <class object>
void MenuButton::SetButton(int xPos, int yPos, LPCWSTR normalFilePath,
LPCWSTR hoverFilePath, LPCWSTR pressedFilePath,
int Width, int Height, object *ButtonObj, void (object::*ButtonFunc)())
{
BUTTON::SetButton(xPos, yPos, normalFilePath, hoverFilePath, pressedFilePath, Width, Height);
this->ButtonObj = ButtonObj;
this->ButtonFunc = ButtonFunc;
}
Then you can invoke the function using both pointers:
((ButtonObj)->*(ButtonFunc))();
Don't forget to pass the pointer to your object to MenuButton::SetButton():
testMenu::testMenu()
:MenuScreen("testMenu")
{
x.SetButton(100,100,TEXT("buttonNormal.png"), TEXT("buttonHover.png"),
TEXT("buttonPressed.png"), 100, 40, this, test2);
draw = false;
}
|
130,664 | 130,743 |
Combining two executables
|
I have a command line executable that alters some bits in a file that I want to use from my program.
Is it possible to create my own executable that uses this tool and distribute only one executable?
[edit] Clarification:
The command line tool takes an offset and some bits and changes the bits at this offset in a given file. So I want to create a patcher for an application that changes specific bits to a specific value. I could write something like a batch file to do it, but I want to create an executable that does it, i.e. embed the tool into a wrapper program that calls it with specific values.
I can code the wrapper in (Windows) C/C++ or asm, but no .NET please.
|
It would be easier to roll your own implementation of this program than to write the wrapper; it sounds like it is trivial -- just open the file, seek to the right location, write your bits, close the file, you're done.
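A minimal sketch of that roll-your-own approach (the offset and patch bytes here are made-up placeholders):
#include <cstdio>
int main()
{
    const long offset = 0x1234;                    // hypothetical patch location
    const unsigned char patch[] = { 0x90, 0x90 };  // hypothetical replacement bytes
    std::FILE* f = std::fopen("target.bin", "r+b");
    if (!f)
        return 1;
    std::fseek(f, offset, SEEK_SET);
    std::fwrite(patch, 1, sizeof(patch), f);
    std::fclose(f);
    return 0;
}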
|
130,800 | 130,885 |
Data visualization in desktop applications
|
I would like to create data visualizations in desktop apps, using frameworks, languages and libraries that help with this kind of task. Visualizations should be interactive: clickable, draggable, customizable, animated...
What I would like to create is something similar to the examples seen here: http://www.visualcomplexity.com/vc/
These are the links I already know: http://delicious.com/laura_laura/visualization?setcount=100
The preferred language is C++/Visual C++ (MFC) because I'm familiar with it, but any other technology is welcome, I would like to make a list from "as similar as possible" to Visual C++ to "very different" from Visual C++.
WPF, flex, Adobe Air, flare, JavaScript (running in a browser as client-side apps with access to local files or as desktop apps) are possibilities, post any good links to examples, tutorials, how-tos, etc. that you know of.
What are the learning curves and complexity for the different options? Which one would you choose and why? Which one have you already worked with and how was your experience? How would you start with a project of this characteristics?
|
Your post has far too many questions in it to be answered easily in one response, so you might try re-posting with specific questions. Data visualization is a HUGE area of study and it's not significantly different for web applications versus desktop applications.
Put simply, it's the display of data in a way that visually tells the story of the data. It's most useful in cases where the volume of data is such that tabular display isn't effective. Trends, outliers, and abnormal occurrences can sometimes only be seen when data is represented visually. The visual form can be a simple chart, or it can be a more advanced visualization such as a treemap or a thematic mapping/GIS presentation.
If it's an area you're interested in studying, look into:
Edward Tufte - Author, professor, and all-around guru for the display of information
Many Eyes - from IBM AlphaWorks
Processing - A visual "sketching" language based on Java
Visualizing Data - An O'Reilly book by Ben Fry, one of the co-creators of Processing
Beyond that, I think specifics would depend on what you want to accomplish -- what data is being analyzed, who the audience is, and what the desired "message" is.
|
130,913 | 130,926 |
What is the state of C++ refactor support in Eclipse?
|
Is it at the state where it is actually useful and can do more than rename classes?
|
CDT (C/C++ Development Tools - eclipse project) 5.0 has a bunch of new refactorings
* Declare Method
* Extract Baseclass
* Extract Constant
* Extract Method
* Extract Subclass
* Hide Method
* Implement Method
* Move Field / Method
* Replace Number
* Separate Class
* Generate Getters and Setters
There is a CDT refactoring wiki
|
131,241 | 131,271 |
Why use iterators instead of array indices?
|
Take the following two lines of code:
for (int i = 0; i < some_vector.size(); i++)
{
//do stuff
}
And this:
for (some_iterator = some_vector.begin(); some_iterator != some_vector.end();
some_iterator++)
{
//do stuff
}
I'm told that the second way is preferred. Why exactly is this?
|
The first form is efficient only if vector.size() is a fast operation. This is true for vectors, but not for lists, for example. Also, what are you planning to do within the body of the loop? If you plan on accessing the elements as in
T elem = some_vector[i];
then you're making the assumption that the container has operator[](std::size_t) defined. Again, this is true for vector but not for other containers.
The use of iterators brings you closer to container independence. You're not making assumptions about random-access ability or fast size() operation, only that the container has iterator capabilities.
You could enhance your code further by using standard algorithms. Depending on what it is you're trying to achieve, you may elect to use std::for_each(), std::transform() and so on. By using a standard algorithm rather than an explicit loop you're avoiding re-inventing the wheel. Your code is likely to be more efficient (given the right algorithm is chosen), correct and reusable.
|
131,445 | 131,689 |
C++ tr1 on GCC 3.4.4 (for the Nokia N810 tablet computer)
|
What does it take to get C++ tr1 members (shared_ptr especially, but we'd like function and bind and ALL the others) working with GCC 3.4.4 (for the Nokia N810 tablet computer).
Has anyone done this? Attempted this?
It may not be feasible for us to upgrade to GCC 4.x to cross-compile for this device (but if you've done that, we'd love to know).
There may be many approaches, and I'd like to avoid dead ends others have hit.
We're trying to avoid bringing in boost, since it can be pretty interdependent (you bring in one boost header and you end up with 20 more), and keeping code size down is important to us.
Thank you!
|
For shared_ptr from boost, at least, the number of dependencies is pretty small. You can use bcp to extract subsets from boost as well, so you don't need the whole of boost to compile your application.
|
131,803 | 131,833 |
unsigned int vs. size_t
|
I notice that modern C and C++ code seems to use size_t instead of int/unsigned int pretty much everywhere - from parameters for C string functions to the STL. I am curious as to the reason for this and the benefits it brings.
|
The size_t type is the unsigned integer type that is the result of the sizeof operator (and the offsetof operator), so it is guaranteed to be big enough to contain the size of the biggest object your system can handle (e.g., a static array of 8Gb).
The size_t type may be bigger than, equal to, or smaller than an unsigned int, and your compiler might make assumptions about it for optimization.
You may find more precise information in the C99 standard, section 7.17, a draft of which is available on the Internet in pdf format, or in the C11 standard, section 7.19, also available as a pdf draft.
|
132,039 | 132,119 |
How to send Email through a C++ program?
|
How can I send an email from C++? Is there a good cross-platform (MS Windows, Linux etc) library that I can use? I'm using GCC (cygwin on MS Windows).
|
Check out jwSMTP - a cross-platform SMTP class.
http://johnwiggins.net/jwsmtp/
|
132,116 | 132,148 |
Heisenbug: WinApi program crashes on some computers
|
Please help! I'm really at my wits' end.
My program is a little personal notes manager (google for "cintanotes").
On some computers (and of course I own none of them) it crashes with an unhandled exception just after start.
Nothing special about these computers could be said, except that they tend to have AMD CPUs.
Environment: Windows XP, Visual C++ 2005/2008, raw WinApi.
Here is what is certain about this "Heisenbug":
1) The crash happens only in the Release version.
2) The crash goes away as soon as I remove all GDI-related stuff.
3) BoundChecker has no complains.
4) Writing a log shows that the crash happens on a declaration of a local int variable! How could that be? Memory corruption?
Any ideas would be greatly appreciated!
UPDATE: I've managed to get the app debugged on a "faulty" PC. The results:
"Unhandled exception at 0x0044a26a in CintaNotes.exe: 0xC000001D: Illegal Instruction."
and code breaks on
0044A26A cvtsi2sd xmm1,dword ptr [esp+14h]
So it seems that the problem was in the "Code Generation/Enable Enhanced Instruction Set" compiler option. It was set to "/arch:SSE2" and was crashing on the machines that didn't support SSE2. I've set this option to "Not Set" and the bug is gone. Phew!
Thank you all very much for help!!
|
So it doesn't crash when the configuration is the DEBUG configuration? There are many things that differ from a RELEASE configuration:
1.) Initialization of globals
2.) Actual machine code generated, etc.
So the first step is to find out the exact settings for each parameter in RELEASE mode as compared to DEBUG mode.
-AD
|
132,242 | 132,308 |
Does a memory leak at unload of a DLL cause a leak in the host process?
|
Consider this case:
dll = LoadDLL()
dll->do()
...
void do() {
char *a = malloc(1024);
}
...
UnloadDLL(dll);
At this point, will the 1k allocated in the call to malloc() be available to the host process again?
The DLL is statically linking to the CRT.
|
No, you do not leak.
If you mix CRT models (static, dynamic) across dlls, then you can end up with a memory error if you allocate memory in one dll and free it in a different one (or in the exe).
This means that the heap created by the statically-linked CRT is not the same heap as a different dll's CRT.
If you'd linked with the dynamic version of the CRT, then you'd have a leak, as the heap is shared amongst all dynamically-linked CRTs. It means you should always design your apps to use the dynamic CRTs, or ensure you never manage memory across a dll boundary (i.e. if you allocate memory in a dll, always provide a routine in the same dll to free it).
|
132,323 | 132,423 |
Obtain an index into a vector using Iterators
|
When iterating over elements of a vector it is preferred to use iterators instead of an index (see Why use iterators instead of array indices?).
std::vector<T> vec;
std::vector<T>::iterator it;
for ( it = vec.begin(); it != vec.end(); ++it )
{
// do work
}
However, it can be necessary to use the index in the body of the loop. Which of the following would be preferable in that case, considering performance and flexibility/extensibility?
Revert to the indexed loop
std::vector<T> vec;
size_t i;
for ( i = 0; i < vec.size(); ++i )
{
// use i
}
Calculate offset
std::vector<T> vec;
std::vector<T>::iterator it;
for ( it = vec.begin(); it != vec.end(); ++it )
{
size_t i = it - vec.begin();
// use i
}
Use std::distance
std::vector<T> vec;
std::vector<T>::iterator it;
for ( it = vec.begin(); it != vec.end(); ++it )
{
size_t i = std::distance( vec.begin(), it );
// use i
}
|
If you're planning on using a vector exclusively, you may want to switch back to the indexed loop, since it conveys your intent more clearly than an iterator loop. However, if evolution of your program in the future may lead to a change of container, you should stick to the iterators and use std::distance, which is guaranteed to work with all standard iterators.
|
132,358 | 132,394 |
How to read file content into istringstream?
|
In order to improve performance reading from a file, I'm trying to read the entire content of a big (several MB) file into memory and then use a istringstream to access the information.
My question is, which is the best way to read this information and "import it" into the string stream? A problem with this approach (see below) is that when creating the string stream the buffer gets copied, and memory usage doubles.
#include <fstream>
#include <sstream>
using namespace std;
int main() {
ifstream is;
is.open (sFilename.c_str(), ios::binary );
// get length of file:
is.seekg (0, std::ios::end);
long length = is.tellg();
is.seekg (0, std::ios::beg);
// allocate memory:
char *buffer = new char [length];
// read data as a block:
is.read (buffer,length);
// create string stream of memory contents
// NOTE: this ends up copying the buffer!!!
istringstream iss( string( buffer, length ) );
// delete temporary buffer
delete [] buffer;
// close filestream
is.close();
/* ==================================
* Use iss to access data
*/
}
|
std::ifstream has a method rdbuf(), that returns a pointer to a filebuf. You can then "push" this filebuf into your stringstream:
#include <fstream>
#include <sstream>
int main()
{
std::ifstream file( "myFile" );
if ( file )
{
std::stringstream buffer;
buffer << file.rdbuf();
file.close();
// operations on the buffer...
}
}
EDIT: As Martin York remarks in the comments, this might not be the fastest solution since the stringstream's operator<< will read the filebuf character by character. You might want to check his answer, where he uses the ifstream's read method as you used to do, and then set the stringstream buffer to point to the previously allocated memory.
|
132,612 | 132,917 |
Show a ContextMenuStrip without it showing in the taskbar
|
I have found that when I execute the show() method for a contextmenustrip (a right click menu), if the position is outside that of the form it belongs to, it shows up on the taskbar also.
I am trying to create a right click menu for when clicking on the notifyicon, but as the menu hovers above the system tray and not inside the form (as the form can be minimised when right clicking) it shows up on the task bar for some odd reason
Here is my code currently:
private: System::Void notifyIcon1_MouseClick(System::Object^ sender, System::Windows::Forms::MouseEventArgs^ e) {
if(e->Button == System::Windows::Forms::MouseButtons::Right) {
this->sysTrayMenu->Show(Cursor->Position);
}
}
What other options do I need to set so it doesn't show up a blank process on the task bar.
|
Try assigning your menu to the ContextMenuStrip property of NotifyIcon rather than showing it in the mouse click handler.
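A minimal sketch, assuming sysTrayMenu is a ContextMenuStrip member of the form; the framework then shows it on right-click without creating a taskbar entry:
// in the form's constructor or after InitializeComponent()
this->notifyIcon1->ContextMenuStrip = this->sysTrayMenu;
// the MouseClick handler and the explicit Show() call are no longer needed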
|
132,667 | 132,730 |
How can I disable #pragma warnings?
|
While developing a C++ application, I had to use a third-party library which produced a huge number of warnings related to a harmless #pragma directive being used.
../File.hpp:1: warning: ignoring #pragma ident
In file included from ../File2.hpp:47,
from ../File3.hpp:57,
from File4.h:49,
Is it possible to disable this kind of warnings, when using the GNU C++ compiler?
|
I believe you can compile with
-Wno-unknown-pragmas
to suppress these.
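For example (hypothetical file name):
g++ -Wall -Wno-unknown-pragmas -c File.cpp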
|
132,738 | 132,915 |
Why should I ever use inline code?
|
I'm a C/C++ developer, and here are a couple of questions that always baffled me.
Is there a big difference between "regular" code and inline code?
Which is the main difference?
Is inline code simply a "form" of macros?
What kind of tradeoff must be done when choosing to inline your code?
Thanks
|
Is there a big difference between "regular" code and inline code?
Yes and no. No, because an inline function or method has exactly the same characteristics as a regular one, most important one being that they are both type safe. And yes, because the assembly code generated by the compiler will be different; with a regular function, each call will be translated into several steps: pushing parameters on the stack, making the jump to the function, popping the parameters, etc, whereas a call to an inline function will be replaced by its actual code, like a macro.
Is inline code simply a "form" of macros?
No! A macro is simple text replacement, which can lead to severe errors. Consider the following code:
#define unsafe(i) ( (i) >= 0 ? (i) : -(i) )
[...]
unsafe(x++); // x is incremented twice!
unsafe(f()); // f() is called twice!
[...]
Using an inline function, you're sure that parameters will be evaluated before the function is actually performed. They will also be type checked, and eventually converted to match the formal parameters types.
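For contrast, a minimal inline-function version of the macro above; each argument is evaluated exactly once:
inline int safe(int i) { return i >= 0 ? i : -i; }
safe(x++);   // x is incremented exactly once
safe(f());   // f() is called exactly once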
What kind of tradeoff must be done when choosing to inline your code?
Normally, program execution should be faster when using inline functions, but with a bigger binary code. For more information, you should read GoTW#33.
|
133,364 | 133,468 |
How do you handle strings in C++?
|
Which is your favorite way to go with strings in C++? A C-style array of chars? Or wchar_t? CString, std::basic_string, std::string, BSTR or CComBSTR?
Certainly each of these has its own area of application, but anyway, which is your favorite and why?
|
std::string or std::wstring, depending on your needs. Why?
They're standard
They're portable
They can handle I18N
They have performance guarantees (as per the standard)
Protected against buffer overflows and similar attacks
Are easily converted to other types as needed
Are nicely templated, giving you a wide variety of options while reducing code bloat and improving performance. Really. Compilers that can't handle templates are long gone now.
A C-style array of chars is just asking for trouble. You'll still need to deal with them on occasion (and that's what std::string.c_str() is for), but, honestly -- one of the biggest dangers in C is programmers doing Bad Things with char* and winding up with buffer overflows. Just don't do it.
An array of wchar_t is the same thing, just bigger.
CString, BSTR, and CComBSTR are not standard and not portable. Avoid them unless absolutely forced. Optimally, just convert a std::string/std::wstring to them when needed, which shouldn't be very expensive.
Note that std::string is just a child of std::basic_string, but you're still better off using std::string unless you have a really good reason not to. Really Good. Let the compiler take care of the optimization in this situation.
|
133,679 | 3,573,305 |
Determine SLOC and complexity of C# and C++ from .NET
|
I have been using SourceMonitor on my project for a couple of years to keep records of source-code complexity and basic SLOC (including comments) for C# and C++ components. These are used for external reporting to our customer, so I'm not in a position to argue their merits or lack of.
I've been working on a repository analysis tool which is able to give me a snap-shot view of the project at any date/time. The next stage I want to add is caching of the metrics for a specified file and revision.
I know SourceMonitor can be scripted to allow me to supply the files to be tested and grab the metrics out of the result file CSV or XML.
Is there a native library in .NET that I could use to do the same thing -- e.g. avoid spawning an external process and parsing the results.
I only really need the following metrics:
SLOC
Number of comment lines
Complexity of most complex method
Name of most complex method
I need to run this on C# code and normal C++ code.
Edit: since I already have tool which provides the GUI and reports I want, the metrics need to be scripted or generated using a library/API without manual steps. Ideally I want to get metrics for a specified file/revision (rather than a whole project) which my utility will drag from version-control automatically.
NOTE: I created a bounty for this and was on holiday when it expired... the NDepends answer does NOT satisfy me as it doesn't look at source-code but the assembly itself!!!
|
Whilst I never did find a .NET product that can equally parse C# and C++, I did manage to find an easy-to-use product, CODECOUNT that supports those languages and many more.
It has a simple command line, unlike SourceMonitor that was being used on my project up until CODECOUNT replaced it.
|
133,837 | 135,367 |
c++ boost lambda libraries
|
What might be the best way to start programming using boost lambda libraries.
|
Remaining within the boundaries of the C++ language and libraries, I would suggest first getting used to programming with the STL algorithm function templates, as one of the most common uses you will have for boost::lambda is replacing functor classes with inline expressions.
The library documentation itself gives you an up-front example of what it is there for:
for_each(a.begin(), a.end(), std::cout << _1 << ' ');
where std::cout << _1 << ' ' produces a function object that, when called, writes its first argument to the cout stream. This is something you could do with a custom functor class, std::ostream_iterator or an explicit loop, but boost::lambda wins in conciseness and probably clarity -- at least if you are used to the functional programming concepts.
When you (over-)use the STL, you find yourself gravitating towards boost::bind and boost::lambda. It comes in really handy for things like:
std::sort( c.begin(), c.end(), bind(&Foo::x, _1) < bind(&Foo::x, _2) );
Before you get to that point, not so much. So use STL algorithms, write your own functors and then translate them into inline expressions using boost::lambda.
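As an illustration of that translation (a hypothetical functor that adds a constant; assumes v is a std::vector<int>, plus <algorithm> and <boost/lambda/lambda.hpp>):
// hand-written functor
struct add_k {
    int k;
    explicit add_k(int k) : k(k) {}
    int operator()(int x) const { return x + k; }
};
std::transform(v.begin(), v.end(), v.begin(), add_k(5));
// the same thing as an inline boost::lambda expression
std::transform(v.begin(), v.end(), v.begin(), boost::lambda::_1 + 5);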
From a professional standpoint, I believe the best way to get started with boost::lambda is to get usage of boost::bind understood and accepted. Use of placeholders in a boost::bind expression looks much less magical than "naked" boost::lambda placeholders and finds easier acceptance during code reviews. Going beyond basic boost::lambda use is quite likely to get you grief from your coworkers unless you are in a bleeding-edge C++ shop.
Try not to go overboard - there are times when and places where a for-loop really is the right solution.
|
133,948 | 136,390 |
Sharing GDI handles between processes in Windows CE 6.0
|
I know that GDI handles are unique and process specific in 'Big Windows' but do they work the same way in Windows CE 6.0?
For example:
I've got a font management service that several other services and applications will be using. This service has a list of valid fonts and configurations for printing and displaying; CreateFontIndirect() has been called on each of them. When one of these client applications requests a particular font (and configuration), can I return it the appropriate HFONT? If not, is there a safe/valid way to duplicate the handle, ala DuplicateHandle for Kernel handles.
The reason I ask, is that I've seen HFONTs passed to another application through PostMessage work correctly, but I didn't think they were 'supposed' to.
|
I believe you are correct, you cannot rely on HFONTs being safe to pass across processes.
'The reason I ask, is that I've seen HFONTs passed to another application through PostMessage work correctly, but I didn't think they were 'supposed' to.'
They were not passed correctly, so there is no 'supposed to'. While HFONTs are not guaranteed to work across processes, they're also not guaranteed to be unique across processes. 'Arial' may have the same HFONT value in two different processes at a point in time with a particular version of each application, and that could change at any moment (including half-way through using it!).
It's like if I'm painting and run out of orange paint, which I keep as the 3rd tube on my easel. I could reach over to your easel and grab the 3rd tube... but I have no guarantee that it's orange... I have no guarantee that it even contains paint! Perhaps you were brushing your teeth at the easel today... oops!
GDI handles are like the number '3' in that example. Today, GDI might keep the tubes in the same order on all easels. It might keep some of them in order, some not (i.e., orange 'sorta works', but 'seafoam green' is busted). They could be in order on one CE device, but not on another.
As always, YMMV.
|
134,371 | 134,445 |
How Do You Write Code That Is Safe for UTF-8?
|
We have a set of applications that were developed for the ASCII character set. Now, we're trying to install it in Iceland, and are running into problems where the Icelandic characters are getting screwed up.
We are working through our issues, but I was wondering: Is there a good "guide" out there for writing C++ code that is designed for 8-bit characters and which will work properly when UTF-8 data is given to it?
I can't expect everyone to read the whole Unicode standard, but if there is something more digestible available, I'd like to share it with the team so we don't run into these issues again.
Re-writing all the applications to use wchar_t or some other string representation is not feasible at this time. I'll also note that these applications communicate over networks with servers and devices that use 8-bit characters, so even if we did Unicode internally, we'd still have issues with translation at the boundaries. For the most part, these applications just pass data around; they don't "process" the text in any way other than copying it from place to place.
The operating systems used are Windows and Linux. We use std::string and plain-old C strings. (And don't ask me to defend any of the design decisions. I'm just trying to help fix the mess.)
Here is a list of what has been suggested:
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
UTF-8 and Unicode FAQ for Unix/Linux
The Unicode HOWTO
|
This looks like a comprehensive quick guide:
http://www.cl.cam.ac.uk/~mgk25/unicode.html
|
134,526 | 134,606 |
How do I know I reached a file's maximum size when using ofstream?
|
While writing a file using ofstream, how do I know when the file's size has reached the OS's maximum file size - more specifically, Linux's maximum file size?
|
First off, maximum file size is a filesystem limit, not an Operating System limit. It will even vary for a particular filesystem, based on how the filesystem was formatted.
As for how you'd figure out that you'd reached the limit, your code will likely throw an exception when that happens, which you'll then be able to relate back to the OS error codes.
Note that the actual limit is pretty darn big for a "typical" EXT2 filesystem - in the Terabytes. You'll likely never reach it, in practice. If you seriously are accumulating Terabytes of data, you might want to consider whether there's a more reasonable way to store it, rather than a single gigantic file.
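If you want to detect the failure at the point of writing, one option is to enable exceptions on the stream. A rough sketch (buffer and bytes stand for whatever you are writing; the actual cause still has to be inferred from errno / the message):
#include <fstream>
#include <iostream>
std::ofstream out("big.dat", std::ios::binary);
out.exceptions(std::ofstream::failbit | std::ofstream::badbit);
try {
    out.write(buffer, bytes);                           // throws on failure
} catch (const std::ios_base::failure& e) {
    std::cerr << "write failed: " << e.what() << '\n';  // possibly the filesystem's size limit
}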
|
134,569 | 134,640 |
c++ exception : throwing std::string
|
I would like to throw an exception when my C++ methods encounter something weird and can't recover. Is it OK to throw a std::string pointer?
Here's what I was looking forward to doing:
void Foo::Bar() {
if(!QueryPerformanceTimer(&m_baz)) {
throw new std::string("it's the end of the world!");
}
}
void Foo::Caller() {
try {
this->Bar(); // should throw
}
catch(std::string *caught) { // not quite sure the syntax is OK here...
std::cout << "Got " << caught << std::endl;
}
}
|
Yes, it will work, but consider throwing something derived from std::exception, the base exception class in the C++ standard library, instead. You may want to avoid using strings as exception classes because they themselves can throw an exception during use. If that happens, then where will you be?
boost has an excellent document on good style for exceptions and error handling. It's worth a read.
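A minimal sketch of that advice applied to the code in the question (names taken from the question; throw by value, catch by const reference):
#include <stdexcept>
#include <iostream>
struct TimerError : std::runtime_error {
    explicit TimerError(const std::string& msg) : std::runtime_error(msg) {}
};
void Foo::Bar() {
    if (!QueryPerformanceTimer(&m_baz)) {
        throw TimerError("it's the end of the world!");
    }
}
void Foo::Caller() {
    try {
        this->Bar();
    }
    catch (const std::exception& caught) {
        std::cout << "Got " << caught.what() << std::endl;
    }
}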
|
134,731 | 134,777 |
Returning a const reference to an object instead of a copy
|
Whilst refactoring some code I came across some getter methods that returns a std::string. Something like this for example:
class foo
{
private:
std::string name_;
public:
std::string name()
{
return name_;
}
};
Surely the getter would be better returning a const std::string&? The current method is returning a copy which isn't as efficient. Would returning a const reference instead cause any problems?
|
The only way this can cause a problem is if the caller stores the reference, rather than copy the string, and tries to use it after the object is destroyed. Like this:
foo *pFoo = new foo;
const std::string &myName = pFoo->getName();
delete pFoo;
cout << myName; // error! dangling reference
However, since your existing function returns a copy, then you would
not break any of the existing code.
Edit: Modern C++ (i.e. C++11 and up) supports Return Value Optimization, so returning things by value is no longer frowned upon. One should still be mindful of returning extremely large objects by value, but in most cases it should be OK.
|
134,796 | 1,085,245 |
Automatically stop Visual C++ 2008 build at first compile error?
|
I know I can compile individual source files, but sometimes -- say, when editing a header file used by many .cpp files -- multiple source files need to be recompiled. That's what Build is for.
Default behavior of the "Build" command in VC9 (Visual C++ 2008) is to attempt to compile all files that need it. Sometimes this just results in many failed compiles. I usually just watch for errors and hit ctrl-break to stop the build manually.
Is there a way to configure it such the build stops at the very first compile error (not the first failed project build) automatically?
|
I came up with a better macro guys. It stops immediately after the first error/s (soon as build window is updated).
Visual Studio -> Tools -> Macros -> Macro IDE... (or ALT+F11)
Private Sub OutputWindowEvents_OnPaneUpdated(ByVal pPane As OutputWindowPane) Handles OutputWindowEvents.PaneUpdated
If Not (pPane.Name = "Build") Then Exit Sub
pPane.TextDocument.Selection.SelectAll()
Dim Context As String = pPane.TextDocument.Selection.Text
pPane.TextDocument.Selection.EndOfDocument()
Dim found As Integer = Context.IndexOf(": error ")
If found > 0 Then
DTE.ExecuteCommand("Build.Cancel")
End If
End Sub
Hope it works out for you guys.
|
135,069 | 135,454 |
#ifdef vs #if - which is better/safer as a method for enabling/disabling compilation of particular sections of code?
|
This may be a matter of style, but there's a bit of a divide in our dev team and I wondered if anyone else had any ideas on the matter...
Basically, we have some debug print statements which we turn off during normal development. Personally I prefer to do the following:
//---- SomeSourceFile.cpp ----
#define DEBUG_ENABLED (0)
...
SomeFunction()
{
int someVariable = 5;
#if(DEBUG_ENABLED)
printf("Debugging: someVariable == %d", someVariable);
#endif
}
Some of the team prefer the following though:
// #define DEBUG_ENABLED
...
SomeFunction()
{
int someVariable = 5;
#ifdef DEBUG_ENABLED
printf("Debugging: someVariable == %d", someVariable);
#endif
}
...which of those methods sounds better to you and why? My feeling is that the first is safer because there is always something defined and there's no danger it could destroy other defines elsewhere.
|
My initial reaction was #ifdef, of course, but I think #if actually has some significant advantages for this - here's why:
First, you can use DEBUG_ENABLED in preprocessor and compiled tests. Example - Often, I want longer timeouts when debug is enabled, so using #if, I can write this
DoSomethingSlowWithTimeout(DEBUG_ENABLED? 5000 : 1000);
... instead of ...
#ifdef DEBUG_ENABLED
DoSomethingSlowWithTimeout(5000);
#else
DoSomethingSlowWithTimeout(1000);
#endif
Second, you're in a better position if you want to migrate from a #define to a global constant. #defines are usually frowned on by most C++ programmers.
And, Third, you say you've a divide in your team. My guess is this means different members have already adopted different approaches, and you need to standardise. Ruling that #if is the preferred choice means that code using #ifdef will compile -and run- even when DEBUG_ENABLED is false. And it's much easier to track down and remove debug output that is produced when it shouldn't be than vice-versa.
Oh, and a minor readability point. You should be able to use true/false rather than 0/1 in your #define, and because the value is a single lexical token, it's the one time you don't need parentheses around it.
#define DEBUG_ENABLED true
instead of
#define DEBUG_ENABLED (1)
|
135,112 | 137,838 |
Java Developer meets Objective-C on Mac OS
|
I developed in C++ many years ago, but these days I am primarily a Java software engineer. Given I own an iPhone, am ready to spring for a MacBook next month, and am generally interested in getting started with Mac OS development (using Objective-C), I thought I would just put this question out there: What next?
More specifically, what books should I pick up, and are there any web resources that some folks could point me to? Some books that I am planning to purchase:
Programming in Objective-C 2.0
Cocoa(R) Programming for Mac OS X (3rd Edition)
Anyone familiar with these titles? Finally, I would be very interested in a summary of what I should be prepared to expect once I embark on this journey. As someone who develops in Java using IntelliJ IDEA, what are some key differences I will notice as I move over to writing Objective-C code in Xcode? What are the differences between Mac OS desktop development and iPhone development? Being used to Java garbage collection, what should I know about Objective-C garbage collection / memory management? Any other language-specific issues that anyone would like to point out? How about building UIs? Is it closer to Swing, to building Visual C++ resource files that code interacts with, or is it more like some of the Borland IDEs that will generate code for GUIs?
|
Having purchased both of the books in your question, I recommend Cocoa Programming for Mac OS X as a quick way to learn the language and the Cocoa framework; it is probably the fastest way to start producing real applications in Cocoa. I highly recommend it. Programming in Objective-C 2.0 is a great reference book, but if you already know C, there's not much it's going to teach you that you can't pick up from the other book. However, if you ever need a list of all the reserved keywords in Objective-C, that's the book to go to.
All of the user interface can be generated programmatically, but you'll find it much easier to use Interface Builder, which comes with Xcode, to lay out the user interface. You'll end up with a lot less code. With bindings, you can even eliminate code which isn't directly related to laying out the interface. The details are in the Cocoa Programming for Mac OS X book.
The one big thing I miss from Java is the collection API. In Cocoa, you just get NSSet, NSArray, and NSDictionary, and there's no analog to the Comparable interface. These classes are also immutable, but have mutable versions such as NSMutableArray.
I actually haven't played with the Garbage Collection in Objective-C 2.0. In previous versions of Objective-C, memory management was handled by the retain, release, and autorelease methods. Objects were created with a retain count of 1. Retaining incremented that count, releasing decremented it, and autoreleasing objects is a little more complicated. Again, the Cocoa Programming book explains it well. Garbage collection is an option, and if it's turned on, the retain, release and autorelease methods do nothing. However, if you are writing a library or framework to be used by others, you should program it as if garbage collection is turned off. That way applications can use it whether or not they have garbage collection turned on.
As for Web resources, http://cocoadevcentral.com/ is a great site with beginner tutorials. The CocoaDev Wiki at http://www.cocoadev.com/ contains detailed information on a lot of topics, and you can usually find some useful information and people on the cocoa-dev mailing list http://lists.apple.com/mailman/listinfo/cocoa-dev
iPhone development is a little different, and the details are restricted by an NDA. However, if you get approved by Apple to get access to the iPhone developer center, Apple has provided some great video overviews of the differences, which point you to the documentation you need to make the jump from Mac OS X to iPhone OS X programming.
|
135,129 | 135,152 |
Should one prefer STL algorithms over hand-rolled loops?
|
I seem to be seeing more 'for' loops over iterators in questions & answers here than I do for_each(), transform(), and the like. Scott Meyers suggests that stl algorithms are preferred, or at least he did in 2001. Of course, using them often means moving the loop body into a function or function object. Some may feel this is an unacceptable complication, while others may feel it better breaks down the problem.
So... should STL algorithms be preferred over hand-rolled loops?
|
It depends on:
Whether high-performance is required
The readability of the loop
Whether the algorithm is complex
If the loop isn't the bottleneck, and the algorithm is simple (like for_each), then for the current C++ standard, I'd prefer a hand-rolled loop for readability. (Locality of logic is key.)
However, now that C++0x/C++11 is supported by some major compilers, I'd say use STL algorithms because they now allow lambda expressions — and thus the locality of the logic.
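For example, the same doubling loop both ways (the lambda form needs C++11 and <algorithm>):
#include <algorithm>
#include <vector>
std::vector<int> v;
// hand-rolled loop
for (std::vector<int>::iterator it = v.begin(); it != v.end(); ++it)
    *it *= 2;
// STL algorithm with a lambda -- the logic stays local to the call site
std::for_each(v.begin(), v.end(), [](int& x) { x *= 2; });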
|
135,634 | 135,684 |
Class design vs. IDE: Are nonmember nonfriend functions really worth it?
|
In the (otherwise) excellent book C++ Coding Standards, Item 44, titled "Prefer writing nonmember nonfriend functions", Sutter and Alexandrescu recommend that only functions that really need access to the members of a class be themselves members of that class. All other operations which can be written by using only member functions should not be part of the class. They should be nonmembers and nonfriends. The arguments are that:
It promotes encapsulation, because there is less code that needs access to the internals of a class.
It makes writing function templates easier, because you don't have to guess each time whether some function is a member or not.
It keeps the class small, which in turn makes it easier to test and maintain.
Although I see the value in these arguments, I see a huge drawback: my IDE can't help me find these functions! Whenever I have an object of some kind, and I want to see what operations are available on it, I can't just type "pMysteriousObject->" and get a list of member functions anymore.
Keeping a clean design is in the end about making your programming life easier. But this would actually make mine much harder.
So I'm wondering if it's really worth the trouble. How do you deal with that?
|
I'm going to have to disagree with Sutter and Alexandrescu on this one. I think if the behavior of function foo() falls within the realm of class Bar's responsibilities, then foo() should be part of Bar.
The fact that foo() doesn't need direct access to Bar's member data doesn't mean it isn't conceptually part of Bar. It can also mean that the code is well factored. It's not uncommon to have member functions which perform all their behavior via other member functions, and I don't see why it should be.
I fully agree that peripherally-related functions should not be part of the class, but if something is core to the class responsibilities, there's no reason it shouldn't be a member, regardless of whether it is directly mucking around with the member data.
As for these specific points:
It promotes encapsulation, because there is less code that needs access to the internals of a class.
Indeed, the fewer functions that directly access the internals, the better. That means that having member functions do as much as possible via other member functions is a good thing. Splitting well-factored functions out of the class just leaves you with a half-class, that requires a bunch of external functions to be useful. Pulling well-factored functions away from their classes also seems to discourage the writing of well-factored functions.
It makes writing function templates easier, because you don't have to guess each time whether some function is a member or not.
I don't understand this at all. If you pull a bunch of functions out of classes, you've thrust more responsibility onto function templates. They are forced to assume that even less functionality is provided by their class template arguments, unless we are going to assume that most functions pulled from their classes are going to be converted into templates (ugh).
It keeps the class small, which in turn makes it easier to test and maintain.
Um, sure. It also creates a lot of additional, external functions to test and maintain. I fail to see the value in this.
|
135,834 | 135,966 |
Python: SWIG vs ctypes
|
In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s). What are the performance metrics of the two?
|
SWIG generates (rather ugly) C or C++ code. It is straightforward to use for simple functions (things that can be translated directly) and reasonably easy to use for more complex functions (such as functions with output parameters that need an extra translation step to represent in Python.) For more powerful interfacing you often need to write bits of C as part of the interface file. For anything but simple use you will need to know about CPython and how it represents objects -- not hard, but something to keep in mind.
ctypes allows you to directly access C functions, structures and other data, and load arbitrary shared libraries. You do not need to write any C for this, but you do need to understand how C works. It is, you could argue, the flip side of SWIG: it doesn't generate code and it doesn't require a compiler at runtime, but for anything but simple use it does require that you understand how things like C datatypes, casting, memory management and alignment work. You also need to manually or automatically translate C structs, unions and arrays into the equivalent ctypes datastructure, including the right memory layout.
It is likely that in pure execution, SWIG is faster than ctypes -- because the management around the actual work is done in C at compiletime rather than in Python at runtime. However, unless you interface a lot of different C functions but each only a few times, it's unlikely the overhead will be really noticeable.
In development time, ctypes has a much lower startup cost: you don't have to learn about interface files, you don't have to generate .c files and compile them, you don't have to check out and silence warnings. You can just jump in and start using a single C function with minimal effort, then expand it to more. And you get to test and try things out directly in the Python interpreter. Wrapping lots of code is somewhat tedious, although there are attempts to make that simpler (like ctypes-configure.)
SWIG, on the other hand, can be used to generate wrappers for multiple languages (barring language-specific details that need filling in, like the custom C code I mentioned above.) When wrapping lots and lots of code that SWIG can handle with little help, the code generation can also be a lot simpler to set up than the ctypes equivalents.
|
136,050 | 1,910,905 |
Prevent visual studio creating browse info (.ncb) files
|
Is there a way to prevent VS2008 from creating browse info (.ncb) files for C++ projects?
I rarely use the class browser and it isn't worth the time it takes to recreate it after every build, especially since it runs even if the build failed.
EDIT - it's also needed for go to declaration/definition
|
There is a registry key for this as well: [HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Languages\Language Services\C/C++]
Intellisense ON
"IntellisenseOptions"=dword:00000000
Intellisense OFF
"IntellisenseOptions"=dword:00000007
Intellisense ON - NO Background UPDATE
"IntellisenseOptions"=dword:00000005
More flags are available and you can Control Intellisense through Macros as well.
ISENSE_NORMAL = 0 'normal (Intellisense On)
ISENSE_NOBG = &H1 'no bg parsing (Intellisense Updating Off - although NCB file will be opened r/w and repersisted at shutdown)
ISENSE_NOQUERY = &H2 'no queries (don't run any ISense queries)
ISENSE_NCBRO = &H4 'no saving of NCB (must be set before opening NCB, doesn't affect updating or queries, just persisting of NCB)
ISENSE_OFF = &H7
|
136,175 | 136,714 |
Fastest small datastore on Windows
|
My app keeps track of the state of about 1000 objects. Those objects are read from and written to a persistent store (serialized) in no particular order.
Right now the app uses the registry to store each object's state. This is nice because:
It is simple
It is very fast
Individual object's state can be read/written without needing to read some larger entity (like pulling out a snippet from a large XML file)
There is a decent editor (RegEdit) which allow easily manipulating individual items
Having said that, I'm wondering if there is a better way. SQLite seems like a possibility, but you don't have the same level of multiple-reader/multiple-writer that you get with the registry, and no simple way to edit existing entries.
Any better suggestions? A bunch of flat files?
|
If you do begin to experiment with SQLite, you should know that "out of the box" it might not seem as fast as you would like, but it can quickly be made to be much faster by applying some established optimization tips:
SQLite optimization
Depending on the size of the data and the amount of RAM available, one of the best performance gains will occur by setting sqlite to use an all-in-memory database rather than writing to disk.
For in-memory databases, pass NULL as the filename argument to sqlite3_open and make sure that TEMP_STORE is defined appropriately
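A minimal sketch of opening an in-memory database with the C API (using the ":memory:" filename, an alternative to the NULL-filename / TEMP_STORE approach just mentioned):
#include <sqlite3.h>
sqlite3* db = NULL;
if (sqlite3_open(":memory:", &db) != SQLITE_OK) {
    /* handle error: sqlite3_errmsg(db) */
}
/* ... create tables, insert, query ... */
sqlite3_close(db);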
On the other hand, if you tell sqlite to use the harddisk, then you will get a similar benefit to your current usage of RegEdit to manipulate the program's data "on the fly."
The way you could simulate your current RegEdit technique with sqlite would be to use the sqlite command-line tool to connect to the on-disk database. You can run UPDATE statements on the sql data from the command-line while your main program is running (and/or while it is paused in break mode).
|
136,548 | 136,695 |
Pro/con: Initializing a variable in a conditional statement
|
In C++ you can initialize a variable in an if statement, like so:
if (CThing* pThing = GetThing())
{
}
Why would one consider this bad or good style? What are the benefits and disadvantages?
Personally i like this style because it limits the scope of the pThing variable, so it can never be used accidentally when it is NULL. However, i don't like that you can't do this:
if (CThing* pThing = GetThing() && pThing->IsReallySomeThing())
{
}
If there's a way to make the above work, please post. But if that's just not possible, i'd still like to know why.
Question borrowed from here, similar topic but PHP.
|
The important thing is that a declaration in C++ is not an expression.
bool a = (CThing* pThing = GetThing()); // not legit!!
You can't do both a declaration and boolean logic in an if statement; the C++ language spec specifically allows either an expression or a declaration.
if(A *a = new A)
{
// this is legit and a is scoped here
}
How can we know whether a is defined between one term and another in an expression?
if((A *a = new A) && a->test())
{
// was a really declared before a->test?
}
Bite the bullet and use an internal if. The scope rules are useful and your logic is explicit:
if (CThing* pThing = GetThing())
{
if(pThing->IsReallySomeThing())
{
}
}
|
136,703 | 137,117 |
Is there a one-liner to read in a file to a string in C++?
|
I need a quick easy way to get a string from a file in standard C++. I can write my own, but just want to know if there is already a standard way, in C++.
Equivalent of this if you know Cocoa:
NSString *string = [NSString stringWithContentsOfFile:file];
|
We can do it, but it's a long line:
#include<fstream>
#include<iostream>
#include<iterator>
#include<string>
using namespace std;
int main()
{
// The one-liner (going through rdbuf() so the temporary ifstream binds to the streambuf* overload)
string fileContents(istreambuf_iterator<char>(ifstream("filename.txt").rdbuf()), istreambuf_iterator<char>());
// Check result
cout << fileContents;
}
Edited : use "istreambuf_iterator" instead of "istream_iterator"
|
136,880 | 136,917 |
Sell me on const correctness
|
So why exactly is it that it's always recommended to use const as often as possible? It seems to me that using const can be more of a pain than a help in C++. But then again, I'm coming at this from the python perspective: if you don't want something to be changed, don't change it. So with that said, here are a few questions:
It seems like every time I mark something as const, I get an error and have to change some other function somewhere to be const too. Then this causes me to have to change another function somewhere else. Is this something that just gets easier with experience?
Are the benefits of using const really enough to compensate for the trouble? If you don't intend on changing an object, why not just not write code that doesn't change it?
I should note that at this point in time, I'm most focused on the benefits of using const for correctness and maintainability purposes, although it is also nice to have an idea of the performance implications.
|
This is the definitive article on "const correctness": https://isocpp.org/wiki/faq/const-correctness.
In a nutshell, using const is good practice because...
It protects you from accidentally changing variables that aren't intended be changed,
It protects you from making accidental variable assignments, and
The compiler can optimize it. For instance, you are protected from
if( x = y ) // whoops, meant if( x == y )
At the same time, the compiler can generate more efficient code because it knows exactly what the state of the variable/function will be at all times. If you are writing tight C++ code, this is good.
You are correct in that it can be difficult to use const-correctness consistently, but the end code is more concise and safer to program with. When you do a lot of C++ development, the benefits of this quickly manifest.
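A small sketch of what the first two points buy you in practice (hypothetical class; needs <iostream>):
class Account {
    double balance_;
public:
    double balance() const { return balance_; }      // promises not to modify the object
    void   deposit(double amount) { balance_ += amount; }
};
void print(const Account& a)   // const reference: no copy, no modification allowed
{
    std::cout << a.balance();  // fine: balance() is const
    // a.deposit(10);          // compile error: deposit() is not const
}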
|
136,946 | 136,954 |
Difference between Enum and Define Statements
|
What's the difference between using a define statement and an enum statement in C/C++ (and is there any difference when using them with either C or C++)?
For example, when should one use
enum {BUFFER = 1234};
over
#define BUFFER 1234
|
enum defines a syntactical element.
#define is a preprocessor directive, executed before the compiler sees the code, and therefore is not a language element of C itself.
Generally enums are preferred as they are type-safe and more easily discoverable. Defines are harder to locate and can have complex behavior, for example one piece of code can redefine a #define made by another. This can be hard to track down.
|
137,038 | 137,074 |
How do you get assembler output from C/C++ source in GCC?
|
How does one do this?
If I want to analyze how something is getting compiled, how would I get the emitted assembly code?
|
Use the -S option to gcc (or g++), optionally with -fverbose-asm which works well at the default -O0 to attach C names to asm operands as comments. It works less well at any optimization level, which you normally want to use to get asm worth looking at.
gcc -S helloworld.c
This will run the preprocessor (cpp) over helloworld.c, perform the initial compilation and then stop before the assembler is run. For useful compiler options to use in that case, see How to remove "noise" from GCC/clang assembly output? (or just look at your code on Matt Godbolt's online Compiler Explorer which filters out directives and stuff, and has highlighting to match up source lines with asm using debug information.)
By default, this will output the file helloworld.s. The output file can be still be set by using the -o option, including -o - to write to standard output for pipe into less.
gcc -S -o my_asm_output.s helloworld.c
Of course, this only works if you have the original source.
An alternative if you only have the resultant object file is to use objdump, by setting the --disassemble option (or -d for the abbreviated form).
objdump -S --disassemble helloworld > helloworld.dump
-S interleaves source lines with normal disassembly output, so this option works best if debugging option is enabled for the object file (-g at compilation time) and the file hasn't been stripped.
Running file helloworld will give you some indication as to the level of detail that you will get by using objdump.
Other useful objdump options include -rwC (to show symbol relocations, disable line-wrapping of long machine code, and demangle C++ names). And if you don't like AT&T syntax for x86, -Mintel. See the man page.
So for example, objdump -drwC -Mintel -S foo.o | less.
-r is very important with a .o that only has 00 00 00 00 placeholders for symbol references, as opposed to a linked executable.
|
137,089 | 137,175 |
Boost::signal memory access error
|
I'm trying to use boost::signal to implement a callback mechanism, and I'm getting a memory access assert in the boost::signal code on even the most trivial usage of the library. I have simplified it down to this code:
#include <boost/signal.hpp>
typedef boost::signal<void (void)> Event;
int main(int argc, char* argv[])
{
Event e;
return 0;
}
Thanks!
Edit: This was Boost 1.36.0 compiled with Visual Studio 2008 w/ SP1. Boost::filesystem, like boost::signal also has a library that must be linked in, and it seems to work fine. All the other boost libraries I use are headers-only, I believe.
|
I've tested your code on my system, and it works fine. I think that there's a mismatch between your compiler, and the compiler that your Boost.Signals library is built on. Try to download the Boost source, and compile Boost.Signals using the same compiler as you use for building your code.
Just for my info, what compiler (and version) are you using?
|
137,258 | 137,264 |
std::map iteration - order differences between Debug and Release builds
|
Here's a common code pattern I have to work with:
class foo {
public:
void InitMap();
void InvokeMethodsInMap();
static void abcMethod();
static void defMethod();
private:
typedef std::map<const char*, pMethod> TMyMap;
TMyMap m_MyMap;
}
void
foo::InitMap()
{
m_MyMap["abc"] = &foo::abcMethod;
m_MyMap["def"] = &foo::defMethod;
}
void
foo::InvokeMethodsInMap()
{
for (TMyMap::const_iterator it = m_MyMap.begin();
it != m_MyMap.end(); it++)
{
(*it->second)(it->first);
}
}
However, I have found that the order that the map is processed in (within the for loop) can differ based upon whether the build configuration is Release or Debug. It seems that the compiler optimisation that occurs in Release builds affects this order.
I thought that by using begin() in the loop above, and incrementing the iterator after each method call, it would process the map in order of initialisation. However, I also remember reading that a map is implemented as a hash table, and order cannot be guaranteed.
This is particularly annoying, as most of the unit tests are run on a Debug build, and often strange order dependency bugs aren't found until the external QA team start testing (because they use a Release build).
Can anyone explain this strange behaviour?
|
Don't use const char* as the key for maps. That means the map is ordered by the addresses of the strings, not the contents of the strings. Use a std::string as the key type, instead.
std::map is not a hash table, it's usually implemented as a red-black tree, and elements are guaranteed to be ordered by some criteria (by default, < comparison between keys).
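A sketch of the corrected declaration; only the key type changes, and if the methods take the key as a const char* you pass it->first.c_str() inside the loop:
typedef std::map<std::string, pMethod> TMyMap;   // keys ordered by string contents, not by pointer value
m_MyMap["abc"] = &foo::abcMethod;                // unchanged; the literal converts to std::string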
|
137,282 | 139,329 |
How can I avoid the Diamond of Death when using multiple inheritance?
|
http://en.wikipedia.org/wiki/Diamond_problem
I know what it means, but what steps can I take to avoid it?
|
A practical example:
class A {};
class B : public A {};
class C : public A {};
class D : public B, public C {};
Notice how class D inherits from both B & C. But both B & C inherit from A. That will result in two copies of the A subobject being included in every D object.
To solve this, we need virtual inheritance. It's class A that needs to be virtually inherited. So, this will fix the issue:
class A {};
class B : virtual public A {};
class C : virtual public A {};
class D : public B, public C {};
|
137,659 | 137,682 |
Persistence of std::map in C++
|
Do you know any easy or simple way to make a map object (from the STL library) persistent (i.e. write it to a file) so that you can recover its state when the program is run later?
Thanks for your help
|
I believe the Boost Serialization library is capable of serializing std::map, but the standard library itself provides no means. Serialization is a great library with a lot of features and is easy to use and to extend to your own types.
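For illustration, a minimal sketch of saving and restoring a std::map<std::string, int> with Boost.Serialization (requires linking against the boost_serialization library):
#include <fstream>
#include <map>
#include <string>
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
#include <boost/serialization/map.hpp>
#include <boost/serialization/string.hpp>
void save(const std::map<std::string, int>& m, const char* path)
{
    std::ofstream ofs(path);
    boost::archive::text_oarchive oa(ofs);
    oa << m;                       // writes the whole map
}
void load(std::map<std::string, int>& m, const char* path)
{
    std::ifstream ifs(path);
    boost::archive::text_iarchive ia(ifs);
    ia >> m;                       // restores it
}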
|
137,812 | 137,820 |
What's your favorite C++0x feature?
|
As many of us know (and many, many more don't), C++ is currently undergoing final drafting for the next revision of the International Standard, expected to be published in about 2 years. Drafts and papers are currently available from the committee website. All sorts of new features are being added, the biggest being concepts and lambdas. There is a very comprehensive Wikipedia article with many of the new features. GCC 4.3 and later implement some C++0x features.
As far as new features go, I really like type traits (and the appropriate concepts), but my definite leader is variadic templates. Until 0x, long template lists have usually involved Boost Preprocessor, and are very unpleasant to write. This makes things a lot easier and allows C++0x templates to be treated like a perfectly functional language using variadic templates. I've already written some very cool code with them, and I can't wait to use them more often!
So what features are you most eagerly anticipating?
|
auto keyword for variable type inferencing
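For example, instead of spelling out the iterator type:
std::map<std::string, std::vector<int> > m;
// C++98
for (std::map<std::string, std::vector<int> >::const_iterator it = m.begin(); it != m.end(); ++it) { /* ... */ }
// C++0x
for (auto it = m.begin(); it != m.end(); ++it) { /* ... */ }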
|
138,122 | 138,143 |
How to create a very big bitmap in C++/MFC / GDI
|
I'd like to be able to create a large (say 20,000 x 20,000) pixel bitmap in a C++ MFC application, using a CDC derived class to write to the bitmap. I've tried using memory DCs as described in the MSDN docs, but these appear to be restricted to sizes compatible with the current display driver.
I'm currently using a bitmap print driver to do the job, but it is extremely slow and uses very large amounts of intermediate storage due to spooling GDI information.
The solution I'm looking for should not involve metafiles or spooling, as the model that I am drawing takes many millions of GDI calls to render.
I could use a divide-and-conquer approach via multiple memory DCs, but it seems like a pretty cumbersome and inelegant technique.
any thoughts?
|
CDC and CBitmap appear to only support device-dependent bitmaps; you might have more luck creating your bitmap with ::CreateDIBSection, then attaching a CBitmap to that. The raw GDI interfaces are a little hoary, unfortunately.
You probably won't have much luck with 20,000 x 20,000 at 32 BPP, at least in a 32-bit application, as that comes out at about 1.5 GB of memory, but I got a valid HBITMAP back with 16 bpp:
BITMAPINFOHEADER bmi = { sizeof(bmi) };
bmi.biWidth = 20000;
bmi.biHeight = 20000;
bmi.biPlanes = 1;
bmi.biBitCount = 16;
HDC hdc = CreateCompatibleDC(NULL);
BYTE* pbData = 0;
HBITMAP hbm = CreateDIBSection(hdc, (BITMAPINFO*)&bmi, DIB_RGB_COLORS, (void**)&pbData, NULL, 0);
DeleteObject(SelectObject(hdc, hbm));
|
138,153 | 3,761,803 |
Is ncurses available for windows?
|
Are there any ncurses libraries in C/C++ for Windows that emulate ncurses in native resizable Win32 windows (not in console mode)?
|
There's an ongoing effort for a PDCurses port:
http://www.mail-archive.com/[email protected]/msg00129.html
http://www.projectpluto.com/win32a.htm
|
138,361 | 138,406 |
How much faster is C++ than C#?
|
Or is it now the other way around?
From what I've heard there are some areas in which C# proves to be faster than C++, but I've never had the guts to test it by myself.
Thought any of you could explain these differences in detail or point me to the right place for information on this.
|
There is no strict reason why a bytecode based language like C# or Java that has a JIT cannot be as fast as C++ code. However C++ code used to be significantly faster for a long time, and also today still is in many cases. This is mainly due to the more advanced JIT optimizations being complicated to implement, and the really cool ones are only arriving just now.
So C++ is faster, in many cases. But this is only part of the answer. The cases where C++ is actually faster, are highly optimized programs, where expert programmers thoroughly optimized the hell out of the code. This is not only very time consuming (and thus expensive), but also commonly leads to errors due to over-optimizations.
On the other hand, code in interpreted languages gets faster in later versions of the runtime (.NET CLR or Java VM), without you doing anything. And there are a lot of useful optimizations JIT compilers can do that are simply impossible in languages with pointers. Also, some argue that garbage collection should generally be as fast or faster as manual memory management, and in many cases it is. You can generally implement and achieve all of this in C++ or C, but it's going to be much more complicated and error prone.
As Donald Knuth said, "premature optimization is the root of all evil". If you really know for sure that your application will mostly consist of very performance critical arithmetic, and that it will be the bottleneck, and it's certainly going to be faster in C++, and you're sure that C++ won't conflict with your other requirements, go for C++. In any other case, concentrate on first implementing your application correctly in whatever language suits you best, then find performance bottlenecks if it runs too slow, and then think about how to optimize the code. In the worst case, you might need to call out to C code through a foreign function interface, so you'll still have the ability to write critical parts in lower level language.
Keep in mind that it's relatively easy to optimize a correct program, but much harder to correct an optimized program.
Giving actual percentages of speed advantages is impossible, it largely depends on your code. In many cases, the programming language implementation isn't even the bottleneck. Take the benchmarks at http://benchmarksgame.alioth.debian.org/ with a great deal of scepticism, as these largely test arithmetic code, which is most likely not similar to your code at all.
|
138,466 | 138,505 |
Get address of current page in Internet Explorer from toolbar
|
I'm trying to wrap my head around creating a toolbar (a tool band in a rebar) in MFC for Internet Explorer using COM.
Is it possible to get the address of the currently viewed page (i.e., https://stackoverflow.com/questions/ask in my case :-) ) from the toolbar?
If so, what should I look in to?
Thanks!
|
You can use the IWebBrowser2::get_LocationURL method.
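A rough sketch, assuming the tool band has already obtained an IWebBrowser2 pointer (typically via IObjectWithSite::SetSite and IServiceProvider), here called pBrowser:
BSTR url = NULL;
if (SUCCEEDED(pBrowser->get_LocationURL(&url)) && url != NULL)
{
    CString address(url);      // address of the currently viewed page
    ::SysFreeString(url);
    // ... use address ...
}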
|
138,917 | 139,152 |
Visual Studio debugger slows down in in-line code
|
Since I upgraded to Visual Studio 2008 from vs2005, I have found a very annoying behaviour when debugging large projects. If I attempt to step into inline code, the debugger appears to lock up for tens of seconds. Each time that I step inside such a function, there is a similar pause. Has anyone experienced this and is anyone aware of a work around?
Postscript:
After learning that MS had a service pack for vs2008 and needing to get it because of other compiling issues, the problem that I was encountering with the debugger was resolved.
|
I used to get this - I think it's a bug with the 'Autos' debug window:
http://social.msdn.microsoft.com/Forums/en-US/vsdebug/thread/eabc58b1-51b2-49ce-b710-15e2bf7e7516/
|
138,952 | 139,733 |
Is there an implementation for Delphi:TClientDataSet in C++ for MVS?
|
I want to migrate from Embarcadero Delphi to Visual Studio, but without a TClientDataset class it is very difficult.
This class represents an in-memory dataset.
I can't find any class like TClientDataset.
Can anyone help me find something like this please?
|
Visual studio has DataSet and DataTable classes which are very close to what a TClientDataSet is in Delphi.
See http://msdn.microsoft.com/en-us/library/system.data.dataset.aspx
|
139,325 | 139,377 |
Setting all values in a std::map
|
How to set all the values in a std::map to the same value, without using a loop iterating over each value?
|
Using a loop is by far the simplest method. In fact, it's a one-liner (C++17):
for (auto& [_, v] : mymap) v = value;
Unfortunately C++ algorithm support for associative containers isn’t great pre-C++20. As a consequence, we can’t directly use std::fill.
To use them anyway (pre-C++20), we need to write adapters — in the case of std::fill, an iterator adapter. Here’s a minimally viable (but not really conforming) implementation to illustrate how much effort this is. I do not advise using it as-is. Use a library (such as Boost.Iterator) for a more general, production-strength implementation.
template <typename M>
struct value_iter : std::iterator<std::bidirectional_iterator_tag, typename M::mapped_type> {
using base_type = std::iterator<std::bidirectional_iterator_tag, typename M::mapped_type>;
using underlying = typename M::iterator;
using typename base_type::value_type;
using typename base_type::reference;
value_iter(underlying i) : i(i) {}
value_iter& operator++() {
++i;
return *this;
}
value_iter operator++(int) {
auto copy = *this;
i++;
return copy;
}
reference operator*() { return i->second; }
bool operator ==(value_iter other) const { return i == other.i; }
bool operator !=(value_iter other) const { return i != other.i; }
private:
underlying i;
};
template <typename M>
auto value_begin(M& map) { return value_iter<M>(map.begin()); }
template <typename M>
auto value_end(M& map) { return value_iter<M>(map.end()); }
With this, we can use std::fill:
std::fill(value_begin(mymap), value_end(mymap), value);
|
139,622 | 139,632 |
Does the CAutoPtr class implement reference counting?
|
Modern ATL/MFC applications now have access to a new shared pointer class called CAutoPtr, and associated containers (CAutoPtrArray, CAutoPtrList, etc.).
Does the CAutoPtr class implement reference counting?
|
Having checked the CAutoPtr source, no, reference counting is not supported. Use boost::shared_ptr instead if this ability is required.
|
139,668 | 140,387 |
Why would the Win32 OleGetClipboard() function return CLIPBRD_E_CANT_OPEN?
|
Under what circumstances will the Win32 API function OleGetClipboard() fail and return CLIPBRD_E_CANT_OPEN?
More background: I am assisting with a Firefox bug fix. Details here:
bug 444800 - cannot retrieve image data from clipboard in lossless format
In the automated test that I helped write, we see that OleGetClipboard() sometimes fails and returns CLIPBRD_E_CANT_OPEN. That is unexpected, and the Firefox code to pull image data off the Windows clipboard depends on that call succeeding.
|
The documentation says that OleGetClipboard can fail with this error code if OpenClipboard fails. In turn, if you read that documentation, it says:
"OpenClipboard fails if another window has the clipboard open."
It's an exclusive resource: only one window can have the clipboard open at a time. Basically, if you can't do it, wait a little while and try again.
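A rough retry sketch along those lines (the retry count and delay are arbitrary):
IDataObject* pDataObj = NULL;
HRESULT hr = E_FAIL;
for (int attempt = 0; attempt < 10; ++attempt)
{
    hr = ::OleGetClipboard(&pDataObj);
    if (hr != CLIPBRD_E_CANT_OPEN)
        break;                 // success, or an error that retrying won't fix
    ::Sleep(50);               // another window has the clipboard open; wait and retry
}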
|
139,705 | 673,642 |
Indirect Typelib not imported well from Debug dll
|
Using VC2005, I have 3 projects to build:
libA (contains a typelib, results in libA.dll): IDL has a line library libA { ...
libB (contains a typelib importing libA, results in libB.dll): IDL has a line importlib( "libA " );
libC (imports libB): one of the source files contains #import <libB.dll>
the #import <libB.dll> is handled by the compiler in the following way (according to documentation):
search directories of %PATH%
search directories of %LIB%
search the "additional include paths" (/I compiler option)
When compiling libC, I can see that cl.exe clearly is able to find the libA.dll on the executable path (using Filemon.exe)
VC error C4772: #import of typelib with another dependency
However, still the libA namespace is not found and all references to libA types are replaced by __missing_type__
(edit) Meanwhile, I found out the problem only appears when using the debug dlls.
Anyone seen this problem before? And solved it?
|
Finally Found It!
In the Visual Studio project, the A.idl file in LibA had the MkTypeLib Compatible setting ON. This overruled the behaviour inherited from the A project. To make things worse, it was only ON in the Debug configuration.
The consequence was that for every
typedef [public] tagE enum { cE1, cE2 } eE;
This resulted in the tagE not being defined in the resulting typelib. When LibB did it's import( "A.dll" ), all references to tagE were replaced with __missing_type__...
|
139,826 | 140,352 |
Track Data Execution Prevention (DEP)
|
When running one of our applications, a tester was faced with the data execution prevention dialog of Windows.
We tried to reproduce this situation on a developer computer for debugging purposes, with no success.
Does anyone know how to find what may cause the DEP protection to kill the application?
Is there any existing tools available for this?
|
The DEP dialog will typically only show when you try to execute code from a region that you're not marking as executable. This might be caused by 'thunks' in a library you're using, e.g. ATL windowing. This problem is fixed in ATL 8.0.
A stack-trashing bug - for example, a buffer overrun - can also cause this problem, by setting the return address to a location that isn't executable. This might not cause an access violation but instead weird behaviour, if DEP is turned off for the process or not available on the hardware.
It might also happen if you throw a C++ exception or raise an SEH exception, and your structured exception handlers have been trashed by a buffer overrun.
|
140,033 | 140,048 |
boost::shared_ptr standard container
|
Assume I have a class foo, and wish to use a std::map to store some boost::shared_ptrs, e.g.:
class foo;
typedef boost::shared_ptr<foo> foo_sp;
typedef std::map<int, foo_sp> foo_sp_map;
foo_sp_map m;
If I add a new foo_sp to the map but the key used already exists, will the existing entry be deleted? For example:
foo_sp_map m;
void func1()
{
foo_sp p(new foo);
m[0] = p;
}
void func2()
{
foo_sp p2(new foo);
m[0] = p2;
}
Will the original pointer (p) be freed when it is replaced by p2? I'm pretty sure it will be, but I thought it was worth asking/sharing.
|
First off, your question title says boost::auto_ptr, but you actually mean boost::shared_ptr
And yes, the original pointer will be freed (if there are no further shared references to it).
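A tiny demonstration of that (a sketch with stand-in types, printing from the destructor so the release point is visible):
#include <boost/shared_ptr.hpp>
#include <cstdio>
#include <map>

struct foo
{
    ~foo() { std::printf("foo destroyed\n"); }
};

typedef boost::shared_ptr<foo> foo_sp;

int main()
{
    std::map<int, foo_sp> m;
    m[0] = foo_sp(new foo);   // reference count of the first foo is 1 (held by the map)
    m[0] = foo_sp(new foo);   // first foo's count drops to 0 -> "foo destroyed" prints here
    return 0;                 // second foo is destroyed when the map goes out of scope
}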
|
140,061 | 140,100 |
When to use dynamic vs. static libraries
|
When creating a class library in C++, you can choose between dynamic (.dll, .so) and static (.lib, .a) libraries. What is the difference between them and when is it appropriate to use which?
|
Static libraries increase the size of the code in your binary. They're always loaded and whatever version of the code you compiled with is the version of the code that will run.
Dynamic libraries are stored and versioned separately. It's possible for a version of the dynamic library to be loaded that wasn't the original one that shipped with your code if the update is considered binary compatible with the original version.
Additionally dynamic libraries aren't necessarily loaded -- they're usually loaded when first called -- and can be shared among components that use the same library (multiple data loads, one code load).
Dynamic libraries were considered to be the better approach most of the time, but originally they had a major flaw (google DLL hell), which has all but been eliminated by more recent Windows OSes (Windows XP in particular).
|
140,347 | 140,373 |
Win32/MFC Get window rect from client rect
|
I know there is a function somewhere that will accept a client rect and it will convert it into a window rect for you. I just can't find / remember it!
Does anyone know what it is?
It will do something similar to:
const CRect client(0, 0, 200, 200);
const CRect window = ClientRectToWindowRect(client);
SetWindowPos(...)
|
You're probably thinking of AdjustWindowRectEx(). Keep in mind, this is intended for use when creating a window - there's no guarantee that it will produce an accurate set of window dimensions for an existing window; for that, use GetWindowRect().
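A hedged sketch of the conversion helper from the question, assuming you know the window's styles (e.g. at creation time) - AdjustWindowRectEx grows the client rect by whatever frame, caption and menu those styles imply:
#include <afxwin.h>   // CRect (MFC); ATL's atltypes.h provides it too

CRect ClientRectToWindowRect(const CRect& client, DWORD style, DWORD exStyle, BOOL hasMenu)
{
    CRect rc(client);
    ::AdjustWindowRectEx(&rc, style, hasMenu, exStyle);  // rc is adjusted in place
    return rc;
}

// e.g. const CRect window = ClientRectToWindowRect(CRect(0, 0, 200, 200),
//                                                  WS_OVERLAPPEDWINDOW, 0, FALSE);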
|
140,786 | 140,865 |
How to simplify this code (generates a random int between min and max based on an unsigned int)?
|
The code is
return min + static_cast<int>(static_cast<double>(max - min + 1.0) *
(number / (UINT_MAX + 1.0)));
number is a random number obtained by rand_s. min and max are ints and represent minimum and maximum values (inclusive).
If you provide a solution not using unsigned int as a number, please also explain how to make it be random.
Please do not submit solutions using rand().
|
The static_cast<double> is redundant because the "+1.0"s will cause promotion to double anyway.
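So a minimal simplification (same behaviour, assuming number is uniform over [0, UINT_MAX] as rand_s guarantees) is just:
return min + static_cast<int>((max - min + 1.0) * (number / (UINT_MAX + 1.0)));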
|
140,935 | 140,944 |
Partial class definition in C++?
|
Does anyone know if it is possible to have partial class definitions in C++?
Something like:
file1.h:
class Test {
public:
int test1();
};
file2.h:
class Test {
public:
int test2();
};
It seems quite useful to me for defining multi-platform classes that share platform-independent functions, because inheritance imposes a cost that buys nothing for multi-platform classes.
I mean, you will never have two platform-specialization instances at runtime, only at compile time. Inheritance could be useful to fulfill your public interface needs, but after that it won't add anything useful at runtime, just costs.
Also you will have to use an ugly #ifdef to use the class because you can't make an instance from an abstract class:
class genericTest {
public:
int genericMethod();
};
Then let's say for win32:
class win32Test: public genericTest {
public:
int win32Method();
};
And maybe:
class macTest: public genericTest {
public:
int macMethod();
};
Let's say that both win32Method() and macMethod() call genericMethod(), and that you have to use the class like this:
#ifdef _WIN32
genericTest *test = new win32Test();
#elif MAC
genericTest *test = new macTest();
#endif
test->genericMethod();
Thinking about it, the inheritance was only useful for giving them both a genericMethod() that depends on the platform-specific one, but you pay the cost of calling two constructors because of that. You also have ugly #ifdefs scattered around the code.
That's why I was looking for partial classes: I could define the platform-dependent part at compile time. Of course, in this silly example I would still need an ugly #ifdef inside genericMethod(), but there are other ways to avoid that.
|
This is not possible in C++; you will get an error about redefining already-defined classes. If you'd like to share behavior, consider inheritance.
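If the goal is purely compile-time selection without the runtime inheritance cost, one hedged alternative (not partial classes; the per-platform headers below are hypothetical) is to write each platform's complete Test class in its own header and pick one with a typedef, so the #ifdef lives in exactly one place:
#ifdef _WIN32
  #include "win32Test.h"   // hypothetical: complete win32Test class
  typedef win32Test Test;
#else
  #include "macTest.h"     // hypothetical: complete macTest class
  typedef macTest Test;
#endif

// Client code is then platform-free:
// Test test;
// test.genericMethod();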
|
141,337 | 141,380 |
Should I store entire objects, or pointers to objects in containers?
|
I'm designing a new system from scratch. I'll be using the STL to store lists and maps of certain long-lived objects.
Question: Should I ensure my objects have copy constructors and store copies of objects within my STL containers, or is it generally better to manage the life & scope myself and just store the pointers to those objects in my STL containers?
I realize this is somewhat short on details, but I'm looking for the "theoretical" better answer if it exists, since I know both of these solutions are possible.
Two very obvious disadvantages of playing with pointers:
1) I must manage allocation/deallocation of these objects myself in a scope beyond the STL.
2) I cannot create a temp object on the stack and add it to my containers.
Is there anything else I'm missing?
|
Since people are chiming in on the efficiency of using pointers:
If you're considering a std::vector, updates are few, you often iterate over your collection, and it's a non-polymorphic type, then storing object "copies" will be more efficient since you'll get better locality of reference.
On the other hand, if updates are common, storing pointers will save the copy/relocation costs.
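As a tiny illustration of the two layouts being compared (a sketch; Foo is just a stand-in type):
#include <boost/shared_ptr.hpp>
#include <vector>

struct Foo { int payload; };

std::vector<Foo> byValue;                         // contiguous objects: best locality of
                                                  // reference, but copies on insert/resize
std::vector<boost::shared_ptr<Foo> > byPointer;   // stable addresses, shared ownership: an
                                                  // extra indirection on every access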
|
141,498 | 285,538 |
What open source C++ static analysis tools are available?
|
Java has some very good open source static analysis tools such as FindBugs, Checkstyle and PMD. Those tools are easy to use, very helpful, run on multiple operating systems, and are free.
Commercial C++ static analysis products are available. Although having such products is great, the cost is just way too much for students, and it is usually rather hard to get a trial version.
The alternative is to find open source C++ static analysis tools that will run on multiple platforms (Windows and Unix). By using an open source tool, it can be modified to fit certain needs. Finding such tools has not been an easy task.
Below is a short list of C++ static analysis tools that were found or suggested by others.
C++ Check http://sf.net/projects/cppcheck/
Oink http://danielwilkerson.com/oink/index.html
C and C++ Code Counter http://sourceforge.net/projects/cccc/
Splint (from answers)
Mozilla's Pork (from answers) (This is now part of Oink)
Mozilla's Dehydra (from answers)
Use option -Weffc++ for GNU g++ (from answers)
What are some other portable open source C++ static analysis tools that anyone knows of and can be recommended?
Some related links.
http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis
http://www.chris-lott.org/resources/cmetrics/
A free tool to check C/C++ source code against a set of coding standards?
http://spinroot.com/static/
Choosing a static code analysis tool
|
Oink is a tool built on top of the Elsa C++ front-end. Mozilla's Pork is a fork of Elsa/Oink.
See: http://danielwilkerson.com/oink/index.html
|
141,752 | 141,772 |
Float values behaving differently across the release and debug builds
|
My application generates different floating point values when I compile it in release mode and in debug mode. The only reason I found out is that I save a binary trace log, and the one from the release build is ever so slightly off from the one from the debug build; it looks like the bottom two bits of the 32-bit float values differ in about half of the cases.
Would you consider this "difference" to be a bug, or would this type of difference be expected? Would this be a compiler bug or an internal library bug?
For example:
LEFTPOS and SPACING are defined floating point values.
float def_x;
int xpos;
def_x = LEFTPOS + (xpos * (SPACING / 2));
The issue is in regards to the X360 compiler.
|
Release mode may have a different FP strategy set. There are different floating point arithmetic modes depending on the level of optimization you'd like. MSVC, for example, has strict, fast, and precise modes.
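A hedged pointer on where those modes live: with desktop MSVC the model is chosen with the /fp switch, and comparing that setting between your debug and release configurations is a good first step; the X360 compiler is MSVC-derived, so it likely exposes the same or a similar option (check its documentation):
cl /O2 /fp:precise mycode.cpp   // default: predictable evaluation order, limited reordering
cl /O2 /fp:fast    mycode.cpp   // allows reassociation/contraction - low bits can change
cl /O2 /fp:strict  mycode.cpp   // strictest model: best chance of debug/release agreement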
|
141,864 | 141,926 |
Operator overloading for C++ maps
|
I need help understanding some C++ operator overload statements. The class is declared like this:
template <class key_t, class ipdc_t>
class ipdc_map_template_t : public ipdc_lockable_t
{
...
typedef map<key_t,
ipdc_t*,
less<key_t>> map_t;
...
The creator of the class has created an iterator for the internal map structure:
struct iterator : public map_t::iterator
{
iterator() {}
iterator(const map_t::iterator & it)
: map_t::iterator(it) {}
iterator(const iterator & it)
: map_t::iterator(
*static_cast<const map_t::iterator *>(&it)) {}
operator key_t() {return ((this->operator*()).first);} // I don't understand this.
operator ipdc_t*() const {return ((this->operator*()).second);} // or this.
};
And begin() and end() return the begin() and end() of the map:
iterator begin() {IT_ASSERT(is_owner()); return map.begin();}
iterator end() {return map.end();}
My question is, if I have an iterator, how do I use those overloads to get the key and the value?
ipdc_map_template_t::iterator iter;
for( iter = my_instance.begin();
iter != my_instance.end();
++iter )
{
key_t my_key = ??????;
ipdc_t *my_value = ??????;
}
|
These are typecast operators, so you can do this:
{
key_t key = iter;
ipdc_t *val = iter;
}
Or, since ipdc_map_template::iterator is a subclass of std::map::iterator, you can still use the original accessors (which I find more readable):
{
key_t key = (*iter).first;
ipdc_t *val = (*iter).second;
// or, equivalently
key_t key = iter->first;
ipdc_t *val = iter->second;
}
|
142,016 | 142,023 |
C/C++ Structure offset
|
I'm looking for a piece of code that can tell me the offset of a field within a structure without allocating an instance of the structure.
IE: given
struct mstct {
int myfield;
int myfield2;
};
I could write:
mstct thing;
printf("offset %lu\n", (unsigned long)(&thing.myfield2 - &thing));
And get offset 4 for the output. How can I do it without that mstct thing declaration/allocating one?
I know that &<struct> does not always point at the first byte of the first field of the structure, I can account for that later.
|
How about the standard offsetof() macro (in stddef.h)?
Edit: for people who might not have the offsetof() macro available for some reason, you can get the effect using something like:
#define OFFSETOF(type, field) ((unsigned long) &(((type *) 0)->field))
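Typical usage of the standard macro (prints the byte offset without ever allocating an mstct):
#include <stddef.h>
#include <stdio.h>

struct mstct {
    int myfield;
    int myfield2;
};

int main(void)
{
    printf("offset %lu\n", (unsigned long) offsetof(struct mstct, myfield2));
    return 0;
}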
|
142,240 | 142,402 |
Explicit code parallelism in c++
|
Out of order execution in CPUs means that a CPU can reorder instructions to gain better performance and it means the CPU is having to do some very nifty bookkeeping and such. There are other processor approaches too, such as hyper-threading.
Some fancy compilers understand the (un)interrelatedness of instructions to a limited extent, and will automatically interleave instruction flows (probably over a longer window than the CPU sees) to better utilise the processor. Deliberate compile-time interleaving of floating and integer instructions is another example of this.
Now I have a highly parallel task. And I typically have an ageing single-core x86 processor without hyper-threading.
Is there a straightforward way to get the body of my 'for' loop for this highly parallel task interleaved so that two (or more) iterations are done together? (This is slightly different from 'loop unwinding' as I understand it.)
My task is a 'virtual machine' running through a set of instructions, which I'll really simplify for illustration as:
void run(int num) {
for(int n=0; n<num; n++) {
vm_t data(n);
for(int i=0; i<data.len(); i++) {
data.insn(i).parse();
data.insn(i).eval();
}
}
}
So the execution trail might look like this:
data(1) insn(0) parse
data(1) insn(0) eval
data(1) insn(1) parse
...
data(2) insn(1) eval
data(2) insn(2) parse
data(2) insn(2) eval
Now, what I'd like is to be able to do two (or more) iterations explicitly in parallel:
data(1) insn(0) parse
data(2) insn(0) parse \ processor can do OOO as these two flow in
data(1) insn(0) eval /
data(2) insn(0) eval \ OOO opportunity here too
data(1) insn(1) parse /
data(2) insn(1) parse
I know, from profiling, (e.g. using Callgrind with --simulate-cache=yes), that parsing is about random memory accesses (cache missing) and eval is about doing ops in registers and then writing results back. Each step is several thousand instructions long. So if I can intermingle the two steps for two iterations at once, the processor will hopefully have something to do whilst the cache misses of the parse step are occurring...
Is there some c++ template madness to get this kind of explicit parallelism generated?
Of course I can do the interleaving - and even staggering - myself in code, but it makes for much less readable code. And if I really want unreadable, I can go so far as assembler! But surely there is some pattern for this kind of thing?
|
It sounds like you ran into the same problem chip designers face: Executing a single instruction takes a lot of effort, but it involves a bunch of different steps that can be strung together in an execution pipeline. (It is easier to execute things in parallel when you can build them out of separate blocks of hardware.)
The most obvious way is to split each task into different threads. You might want to create a single thread to execute each instruction to completion, or create one thread for each of your two execution steps and pass data between them. In either case, you'll have to be very careful with how you share data between threads and make sure to handle the case where one instruction affects the result of the following instruction. Even though you only have one core and only one thread can be running at any given time, your operating system should be able to schedule compute-intensive threads while other threads are waiting for their cache misses.
(A few hours of your time would probably pay for a single very fast computer, but if you're trying to deploy it widely on cheap hardware it might make sense to consider the problem the way you're looking at it. Regardless, it's an interesting problem to consider.)
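For what it's worth, here is a hedged sketch of the thread-per-chunk idea (the vm_t/insn_t types below are minimal stand-ins for the question's types, and std::thread is C++11 - boost::thread or pthreads would look the same):
#include <thread>
#include <vector>

struct insn_t { void parse() {} void eval() {} };   // stand-in for the real instruction type

struct vm_t {
    explicit vm_t(int) : insns(16) {}
    int len() const { return static_cast<int>(insns.size()); }
    insn_t& insn(int i) { return insns[i]; }
    std::vector<insn_t> insns;
};

void run_range(int begin, int end) {
    for (int n = begin; n < end; ++n) {
        vm_t data(n);
        for (int i = 0; i < data.len(); ++i) {
            data.insn(i).parse();
            data.insn(i).eval();
        }
    }
}

void run_parallel(int num) {
    std::thread t1(run_range, 0, num / 2);     // first half of the iterations
    std::thread t2(run_range, num / 2, num);   // second half runs concurrently
    t1.join();
    t2.join();
}

int main() { run_parallel(100); }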
|
142,331 | 142,475 |
Resources for high performance computing in C++
|
I'm looking for books or online resources that go in detail over programming techniques for high performance computing using C++.
|
Practically all HPC code I've heard of is either for solving systems of linear equations or for FFTs. Here are some links to start you off, at least for the libraries used:
BLAS - standard set of routines for linear algebra - stuff like matrix multiplication
LAPACK - standard set of higher level linear algebra routines - stuff like LU decomp.
ATLAS - Optimized BLAS implementation
FFTW - Optimized FFT implementation
PBLAS - BLAS for distributed processors
SCALAPACK - distributed LAPACK implementation
MPI - Communications library for distributed systems.
PETSc - Scalable nonlinear and linear solvers (user-extensible, interface to much above)
|
142,391 | 142,401 |
Getting a boost::shared_ptr for this
|
I am making extensive use of boost:shared_ptr in my code. In fact, most of the objects that are allocated on the heap are held by a shared_ptr. Unfortunately this means that I can't pass this into any function that takes a shared_ptr. Consider this code:
void bar(boost::shared_ptr<Foo> pFoo)
{
...
}
void Foo::someFunction()
{
bar(this);
}
There are two problems here. First, this won't compile because the T* constructor for shared_ptr is explicit. Second, if I force it to build with bar(boost::shared_ptr<Foo>(this)) I will have created a second shared pointer to my object that will eventually lead to a double-delete.
This brings me to my question: Is there any standard pattern for getting a copy of the existing shared pointer you know exists from inside a method on one of those objects? Is using intrusive reference counting my only option here?
|
You can derive from enable_shared_from_this and then use "shared_from_this()" instead of "this" to spawn a shared pointer to your own object.
Example in the link:
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>
#include <cassert>

class Y: public boost::enable_shared_from_this<Y>
{
public:
    boost::shared_ptr<Y> f()
    {
        return shared_from_this();
    }
};

int main()
{
    boost::shared_ptr<Y> p(new Y);
    boost::shared_ptr<Y> q = p->f();
    assert(p == q);
    assert(!(p < q || q < p)); // p and q must share ownership
}
It's a good idea when spawning threads from a member function to boost::bind to a shared_from_this() instead of this. It will ensure that the object is not released.
|