question_id | answer_id | title | question | answer
---|---|---|---|---|
99,479 | 99,485 |
Visual C++/Studio: Application configuration incorrect?
|
My C(++) program, written and compiled using Visual C(++)/Visual Studio, runs fine on my own machine, but refuses to run on another machine. The error message I get is "This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem."
|
If you write a C++ program, it links dynamically to the C Runtime Library, or CRT for short. This library contains your printf, your malloc, your strtok, etcetera. The library is contained in the file called MSVCR80.DLL. This file is not by default installed on a Windows system, hence the application cannot run.
The solution? Either install the DLL on the target machine through VCREDIST.EXE (the Visual C++ Redistributable Package), or link to the CRT statically (plug the actual code for the used functions straight into your EXE).
Distributing and installing VCREDIST along with a simple application is a pain in the arse, so I went for the second option: static linking. It's really easy: go to your project's properties, unfold C/C++, click Code Generation, and set the Runtime Library to one of the non-DLL options. That's all there is to it.
|
99,510 | 99,532 |
Do several levels of base classes slow down a class/struct in C++?
|
Does having several levels of base classes slow down a class? A derives B derives C derives D derives F derives G, ...
Does multiple inheritance slow down a class?
|
Non-virtual function calls have absolutely no performance hit at run-time, in accordance with the C++ mantra that you shouldn't pay for what you don't use.
In a virtual function call, you generally pay for an extra pointer lookup, no matter how many levels of inheritance, or number of base classes you have.
Of course this is all implementation defined.
Edit: As noted elsewhere, in some multiple inheritance scenarios, an adjustment to the 'this' pointer is required before making the call. Raymond Chen describes how this works for COM objects. Basically, calling a virtual function on an object that inherits from multiple bases can require an extra subtraction and a jmp instruction on top of the extra pointer lookup required for a virtual call.
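A minimal sketch of where that adjustment shows up (class names are ours; the exact layout is implementation-specific):
#include <iostream>

struct A { virtual ~A() {} int a; };
struct B { virtual ~B() {} int b; };
struct C : A, B {};

int main()
{
    C c;
    B* pb = &c;   // implicit upcast: pb points at the B subobject inside C
    // On typical implementations the two addresses differ, which is why a
    // virtual call through pb may go via a thunk that adjusts 'this' before
    // jumping to the final overrider.
    std::cout << static_cast<void*>(&c) << ' ' << static_cast<void*>(pb) << '\n';
}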
|
99,552 | 99,575 |
Where do "pure virtual function call" crashes come from?
|
I sometimes notice programs that crash on my computer with the error: "pure virtual function call".
How do these programs even compile when an object cannot be created of an abstract class?
|
They can result if you try to make a virtual function call from a constructor or destructor. During construction and destruction, virtual dispatch resolves to the base class version (the derived class object hasn't been constructed yet, or has already been destroyed), and in the case of a pure virtual function, that version doesn't exist.
class Base
{
public:
    Base() { reallyDoIt(); }
    void reallyDoIt() { doIt(); } // DON'T DO THIS
    virtual void doIt() = 0;
};

class Derived : public Base
{
    void doIt() {}
};

int main(void)
{
    Derived d; // This will cause the "pure virtual function call" error
}
See also Raymond Chen's two articles on the subject.
|
99,623 | 99,787 |
How to draw in the nonclient area?
|
I'd like to be able to do some drawing to the right of the menu bar, in the nonclient area of a window.
Is this possible, using C++ / MFC?
|
Charlie hit on the answer with WM_NCPAINT. If you're using MFC, the code would look something like this:
// in the message map
ON_WM_NCPAINT()
// ...

void CMainFrame::OnNcPaint()
{
    // still want the menu to be drawn, so trigger the default handler first
    Default();

    // get menu bar bounds
    MENUBARINFO menuInfo = {sizeof(MENUBARINFO)};
    if ( GetMenuBarInfo(OBJID_MENU, 0, &menuInfo) )
    {
        CRect windowBounds;
        GetWindowRect(&windowBounds);
        CRect menuBounds(menuInfo.rcBar);
        menuBounds.OffsetRect(-windowBounds.TopLeft());

        // horrible, horrible icon-drawing code. Don't use this. Seriously.
        CWindowDC dc(this);
        HICON appIcon = (HICON)::LoadImage(AfxGetResourceHandle(),
            MAKEINTRESOURCE(IDR_MAINFRAME), IMAGE_ICON, 16, 16, LR_DEFAULTCOLOR);
        ::DrawIconEx(dc, menuBounds.right-18, menuBounds.top+2, appIcon,
            0, 0, 0, NULL, DI_NORMAL);
        ::DestroyIcon(appIcon);
    }
}
|
100,221 | 100,530 |
Tools for finding unused function declarations?
|
Whilst refactoring some old code I realised that a particular header file was full of function declarations for functions long since removed from the .cpp file. Does anyone know of a tool that could find (and strip) these automatically?
|
If possible, you could make a test.cpp file that calls them all; the linker will flag the ones that have no code as unresolved externals. This way your test code only needs to compile and link, not actually run.
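A hedged sketch of what such a test.cpp might look like (legacy.h, foo and bar are hypothetical names standing in for the suspect declarations):
#include "legacy.h"   // the header full of suspect declarations
#include <cstdio>

int main()
{
    // Taking each function's address forces the linker to resolve it;
    // anything long since removed shows up as an unresolved external.
    std::printf("%p\n", (void*)&foo);
    std::printf("%p\n", (void*)&bar);
}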
|
100,444 | 100,501 |
How to set breakpoints on future shared libraries with a command flag
|
I'm trying to automate a gdb session using the --command flag. I'm trying to set a breakpoint on a function in a shared library (the Unix equivalent of a DLL). My cmds.gdb looks like this:
set args /home/shlomi/conf/bugs/kde/font-break.txt
b IA__FcFontMatch
r
However, I'm getting the following:
shlomi:~/progs/bugs-external/kde/font-breaking$ gdb --command=cmds.gdb...
GNU gdb 6.8-2mdv2009.0 (Mandriva Linux release 2009.0)
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "i586-mandriva-linux-gnu"...
(no debugging symbols found)
Function "IA__FcFontMatch" not defined.
Make breakpoint pending on future shared library load? (y or [n]) [answered N; input not from terminal]
So it doesn't set the breakpoint after all. How can I make it default to answer "y" to set breakpoints on pending future shared library load?
I recall that I was able to do something, but cannot recall what.
|
Replying to myself, I'd like to give the answer that someone gave me on IRC:
(gdb) apropos pending
actions -- Specify the actions to be taken at a tracepoint
set breakpoint -- Breakpoint specific settings
set breakpoint pending -- Set debugger's behavior regarding pending breakpoints
show breakpoint -- Breakpoint specific settings
show breakpoint pending -- Show debugger's behavior regarding pending breakpoints
And so set breakpoint pending on does the trick; use it in cmds.gdb like so:
set breakpoint pending on
break <source file name>:<line number>
|
100,596 | 1,657,329 |
Best resources for converting C/C++ dll headers to Delphi?
|
A rather comprehensive site explaining the difficulties and solutions involved in using a DLL written in C/C++, and the conversion of the .h header file to Delphi/Pascal, was posted to a mailing list I was on recently. I thought I'd share it, and invite others to post other useful resources for this, whether they be links, conversion tools, or book/paper titles.
One resource per answer please, so we'll end up with the most popular/best resources bubbling to the top.
|
Over at Rudy's Delphi Corner, he has an excellent article about the pitfalls of converting C/C++ to Delphi. In my opinion, this is essential information when attempting this task. Here is the description:
This article is meant for everyone who needs to translate C/C++ headers to Delphi. I want to share some of the pitfalls you can encounter when converting from C or C++. This article is not a tutorial, just a discussion of frequently encountered problem cases. It is meant for the beginner as well as for the more experienced translator of C and C++.
He also wrote a "Conversion Helper Package" that installs into the Delphi IDE which aids in converting C/C++ code to Delphi:
His other relevant articles on this topic include:
Using C++ Objects in Delphi
Using C object files in Delphi
|
100,854 | 100,929 |
Reuse define statement from .h file in C# code
|
I have a C++ project (VS2005) which includes a header file with a version number in a #define directive. Now I need to include exactly the same number in a twin C# project. What is the best way to do it?
I'm thinking about including this file as a resource, then parsing it at runtime with a regex to recover the version number, but maybe there's a better way. What do you think?
I cannot move the version outside the .h file; the build system also depends on it, and the C# project is the one that should be adapted.
|
You can achieve what you want in just a few steps:
Create a MSBuild Task - http://msdn.microsoft.com/en-us/library/t9883dzc.aspx
Update the project file to include a call to the task created prior to build
The task receives a parameter with the location of the .h header file you referred to. It then extracts the version and puts it into a C# placeholder file you have previously created. Alternatively, consider using AssemblyInfo.cs, which normally holds versions, if that works for you.
If you need extra information please feel free to comment.
|
101,046 | 146,788 |
Oracle OCI array fetch of simple data types?
|
I cannot understand the Oracle documentation. :-(
Does anybody know how to fetch multiple rows of simple data from Oracle via OCI?
I currently use OCIDefineByPos to define single variables (I only need to do this for simple integers -- SQLT_INT/4-byte ints) and then fetch a single row at a time with OCIStmtExecute/OCIStmtFetch2.
This is OK for small amounts of data, but it takes around 0.5 ms per row, so when reading tens of thousands of rows this is too slow.
I just don't understand the documentation for OCIBindArrayOfStruct. How can I fetch a few thousand rows at a time?
|
You can use OCIDefineArrayOfStruct to support fetching arrays of records. You do this by passing the base of the array to OCIDefineByPos, and use OCIDefineArrayOfStruct to tell Oracle about the size of the records (skip size). I believe that you then call OCIFetch telling it to fetch the array size.
An alternative is to set the statement attribute OCI_ATTR_PREFETCH_ROWS before the statement is executed. This tells Oracle how many rows to fetch at a time; it defaults to 1. Using this approach, Oracle makes fewer round trips and buffers the rows for you.
OCIBindArrayOfStruct is used with DML statements. It works in a similar fashion to OCIDefineArrayOfStruct except that it works with bind variables.
You can find sample code on the Oracle website.
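A rough sketch of the array-fetch approach (error handling omitted; the service, statement, and error handles are assumed to be set up, and the SELECT prepared, elsewhere):
#include <oci.h>

enum { FETCH_COUNT = 1000 };

void fetchInts(OCISvcCtx* svchp, OCIStmt* stmtp, OCIError* errhp)
{
    sb4 values[FETCH_COUNT];
    OCIDefine* defnp = 0;

    // Define the base of the array once; Oracle fills consecutive elements.
    OCIDefineByPos(stmtp, &defnp, errhp, 1,
                   values, sizeof(values[0]), SQLT_INT,
                   0, 0, 0, OCI_DEFAULT);

    OCIStmtExecute(svchp, stmtp, errhp, 0, 0, 0, 0, OCI_DEFAULT);

    // Each call brings back up to FETCH_COUNT rows in one round trip.
    OCIStmtFetch2(stmtp, errhp, FETCH_COUNT, OCI_FETCH_NEXT, 0, OCI_DEFAULT);
}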
|
101,267 | 102,747 |
Is there any way to define a constant value to Java at compile time
|
When I used to write libraries in C/C++ I got into the habit of having a method to return the compile date/time. This was always compiled into the library, so it would differentiate builds of the library. I got this by returning a #define in the code:
C++:
#ifdef _BuildDateTime_
char* SomeClass::getBuildDateTime() {
return _BuildDateTime_;
}
#else
char* SomeClass::getBuildDateTime() {
return "Undefined";
}
#endif
Then on the compile I had a '-D_BuildDateTime_=Date' in the build script.
Is there any way to achieve this or similar in Java, without needing to remember to edit any files manually or distribute any separate files?
One suggestion I got from a co-worker was to get the ant file to create a file on the classpath and to package that into the JAR and have it read by the method.
Something like (assuming the file created was called 'DateTime.dat'):
// I know Exceptions and proper open/closing
// of the file are not done. This is just
// to explain the point!
String getBuildDateTime() {
return new BufferedReader(getClass()
.getResourceAsStream("DateTime.dat")).readLine();
}
To my mind that's a hack and could be circumvented/broken by someone having a similarly named file outside the JAR, but on the classpath.
Anyway, my question is whether there is any way to inject a constant into a class at compile time
EDIT
The reason I consider using an externally generated file in the JAR a hack is that this is a library which will be embedded in client apps. These client apps may define their own classloaders, meaning I can't rely on the standard JVM class-loading rules.
My personal preference would be to go with using the date from the JAR file as suggested by serg10.
|
I would favour the standards based approach. Put your version information (along with other useful publisher stuff such as build number, subversion revision number, author, company details, etc) in the jar's Manifest File.
This is a well documented and understood Java specification. Strong tool support exists for creating manifest files (a core Ant task for example, or the maven jar plugin). These can help with setting some of the attributes automatically - I have maven configured to put the jar's maven version number, Subversion revision and timestamp into the manifest for me at build time.
You can read the contents of the manifest at runtime with standard java api calls - something like:
import java.util.jar.*;
...
JarFile myJar = new JarFile("nameOfJar.jar"); // various constructors available
Manifest manifest = myJar.getManifest();
Map<String,Attributes> manifestContents = manifest.getEntries(); // or manifest.getMainAttributes() for the main section
To me, that feels like a more Java standard approach, so will probably prove more easy for subsequent code maintainers to follow.
|
101,329 | 101,583 |
If classes with virtual functions are implemented with vtables, how is a class with no virtual functions implemented?
|
In particular, wouldn't there have to be some kind of function pointer in place anyway?
|
Non-virtual member functions are really just syntactic sugar, as they are almost like an ordinary function but with access checking and an implicit object parameter.
struct A
{
    void foo ();
    void bar () const;
};
is basically the same as:
struct A
{
};

void foo (A * this);
void bar (A const * this);
The vtable is needed so that we call the right function for our specific object instance. For example, if we have:
struct A
{
    virtual void foo ();
};
The implementation of 'foo' might approximate to something like:
void foo (A * this)
{
    void (*realFoo)(A *) = lookupVtable (this->vtable, "foo");
    (realFoo)(this); // Make the call to the most derived version of 'foo'
}
|
101,604 | 101,640 |
Converting C++ code to HTML safe
|
I decided to try the http://www.screwturn.eu/ wiki as a code snippet storage utility. So far I am very impressed, but what irks me is that when I copy-paste the code that I want to save, '<'s and '['s (http://en.wikipedia.org/wiki/Character_encodings_in_HTML#Character_references) invariably screw up the output, as the wiki interprets them as either wiki or HTML tags.
Does anyone know a way around this? Or failing that, know of a simple utility that would take C++ code and convert it to HTML safe code?
|
Surround your code in <nowiki> .. </nowiki> tags.
|
102,009 | 102,044 |
When is it best to use the stack instead of the heap and vice versa?
|
In C++, when is it best to use the stack? When is it best to use the heap?
|
Use the stack when your variable will not be used after the current function returns. Use the heap when the data in the variable is needed beyond the lifetime of the current function.
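A minimal illustration of that rule:
int* onStack()
{
    int local = 42;
    return &local;    // WRONG: 'local' dies when the function returns
}

int* onHeap()
{
    int* p = new int(42);
    return p;         // OK: heap storage persists until someone calls 'delete'
}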
|
102,283 | 102,341 |
What WPF C# Control is similar to a CWnd in C++?
|
What would be the best WPF control in C# (VS 2008) that you can place on a form that would allow you to do drawing similar to the "Paint" function for the CWnd class in C++? Also, that could display bitmaps, have a scroll bar, and the ability to accept user inputs (ie. MouseMove, Button Clicks, etc...). Basically all the functionality of a CWnd in a control on a WPF form?
|
The UIElement is the lowest-level element that supports input and drawing. Although, using WPF, you really have to do a lot less manual drawing. Are you sure that you need to do this? Also, a scroll bar will never be inherent in your element; if you need scrolling behavior, just wrap your element in a ScrollViewer.
|
102,459 | 102,529 |
Why does std::stack use std::deque by default?
|
Since the only operations required for a container to be used in a stack are:
back()
push_back()
pop_back()
Why is the default container for it a deque instead of a vector?
Doesn't a deque allocate a buffer of elements before front(), so that push_front() is an efficient operation? Aren't these elements wasted, since they will never be used in the context of a stack?
If there is no overhead for using a deque this way instead of a vector, why is the default for priority_queue a vector not a deque also? (priority_queue requires front(), push_back(), and pop_back() - essentially the same as for stack)
Updated based on the Answers below:
It appears that the way deque is usually implemented is as a variable-size array of fixed-size arrays. This makes growing faster than a vector (which requires reallocation and copying), so for something like a stack, which is all about adding and removing elements, deque is likely a better choice.
priority_queue requires indexing heavily, as every removal and insertion requires you to run pop_heap() or push_heap(). That probably makes vector a better choice there, since adding an element is still amortized constant anyway.
|
As the container grows, a reallocation for a vector requires copying all the elements into the new block of memory. Growing a deque allocates a new block and links it to the list of blocks - no copies are required.
Of course you can specify that a different backing container be used if you like. So if you have a stack that you know is not going to grow much, tell it to use a vector instead of a deque if that's your preference.
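For example (a quick sketch):
#include <stack>
#include <vector>

int main()
{
    std::stack<int, std::vector<int> > s;   // vector-backed stack
    s.push(42);
    s.pop();
}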
|
103,280 | 103,926 |
Portable way to catch signals and report problem to the user
|
If by some miracle a segfault occurs in our program, I want to catch the SIGSEGV and let the user (possibly a GUI client) know with a single return code that a serious problem has occurred. At the same time I would like to display information on the command line to show which signal was caught.
Today our signal handler looks as follows:
void catchSignal (int reason) {
    std::cerr << "Caught a signal: " << reason << std::endl;
    exit (1);
}
I can hear the screams of horror with the above, as I have read from this thread that it is evil to call a non-reentrant function from a signal handler.
Is there a portable way to handle the signal and provide information to users?
EDIT: Or at least portable within the POSIX framework?
|
This table lists all of the functions that POSIX guarantees to be async-signal-safe and so can be called from a signal handler.
By using 'write' from this table, the following relatively "ugly" solution hopefully will do the trick:
#include <csignal>

#ifdef _WINDOWS_
  #define _Exit _exit   // MSVC spells the async-safe exit function '_exit'
#else
  #include <unistd.h>
#endif

#define PRINT_SIGNAL(X) case X: \
    write (STDERR_FILENO, #X ")\n", sizeof(#X ")\n")-1); \
    break;

void catchSignal (int reason) {
    char s[] = "Caught signal: (";
    write (STDERR_FILENO, s, sizeof(s) - 1);
    switch (reason)
    {
        // These are the handlers that we catch
        PRINT_SIGNAL(SIGUSR1);
        PRINT_SIGNAL(SIGHUP);
        PRINT_SIGNAL(SIGINT);
        PRINT_SIGNAL(SIGQUIT);
        PRINT_SIGNAL(SIGABRT);
        PRINT_SIGNAL(SIGILL);
        PRINT_SIGNAL(SIGFPE);
        PRINT_SIGNAL(SIGBUS);
        PRINT_SIGNAL(SIGSEGV);
        PRINT_SIGNAL(SIGTERM);
    }
    _Exit (1); // 'exit' is not async-signal-safe
}
EDIT: Building on windows.
After trying to build this on Windows, it appears that STDERR_FILENO is not defined. From the documentation, however, its value appears to be 2.
#include <io.h>
#define STDERR_FILENO 2
EDIT: 'exit' should not be called from the signal handler either!
As pointed out by fizzer, calling _Exit in the above is a sledgehammer approach for signals such as HUP and TERM. Ideally, when these signals are caught, a flag of type "volatile sig_atomic_t" can be used to notify the main program that it should exit.
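A minimal sketch of that flag approach:
#include <csignal>

volatile sig_atomic_t g_signalRaised = 0;

void noteSignal(int reason)
{
    g_signalRaised = reason;   // only record it; act on it outside the handler
}

int main()
{
    std::signal(SIGTERM, noteSignal);
    while (!g_signalRaised)
    {
        // ... normal work ...
    }
    return 1;   // clean, orderly shutdown happens here
}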
The following I found useful in my searches.
Introduction To Unix Signals Programming
Extending Traditional Signals
|
103,298 | 103,381 |
How to convert an unmanaged double to a managed string?
|
From managed C++, I am calling an unmanaged C++ method which returns a double. How can I convert this double into a managed string?
|
I assume something like
(gcnew System::Double(d))->ToString()
|
103,358 | 103,511 |
C++ strings: UTF-8 or 16-bit encoding?
|
I'm still trying to decide whether my (home) project should use UTF-8 strings (implemented in terms of std::string with additional UTF-8-specific functions when necessary) or some 16-bit string (implemented as std::wstring). The project is a programming language and environment (like VB, it's a combination of both).
There are a few wishes/constraints:
It would be cool if it could run on limited hardware, such as computers with limited memory.
I want the code to run on Windows, Mac and (if resources allow) Linux.
I'll be using wxWidgets as my GUI layer, but I want the code that interacts with that toolkit confined in a corner of the codebase (I will have non-GUI executables).
I would like to avoid working with two different kinds of strings when working with user-visible text and with the application's data.
Currently, I'm working with std::string, with the intent of using UTF-8 manipulation functions only when necessary. It requires less memory, and seems to be the direction many applications are going anyway.
If you recommend a 16-bit encoding, which one: UTF-16? UCS-2? Another one?
|
I would recommend UTF-16 for any kind of data manipulation and UI.
The Mac OS X and Win32 APIs use UTF-16; the same goes for wxWidgets, Qt, ICU, Xerces, and others.
UTF-8 might be better for data interchange and storage.
See http://unicode.org/notes/tn12/.
But whatever you choose, I would definitely recommend against std::string with UTF-8 "only when necessary".
Go all the way with UTF-16 or UTF-8, but do not mix and match, that is asking for trouble.
|
103,480 | 103,594 |
iPhone programming - impressions, opinions?
|
I've been programming in C,C++,C# and a few other languages for many years, mainly for Windows and Linux but also embedded platforms. Recently started to do some iPhone programming as a side project so I'm using Apple platforms for the first time since my Apple II days. I'm wondering what other developers that are coming to Mac OSX, Xcode and iPhone SDK think. Here are my impressions, so far:
Mac OSX: very confusing, I tend to end up with too many open windows and don't know what's where. Luckily there's the bird's eye view, without it I'd be lost. With the shell at least there's all the familiar stuff so that helps me a lot.
Xcode: doesn't feel as good as VisualStudio or Eclipse, the two environments I'm familiar with. I think I could get used to it but I'm wondering if Apple wouldn't be better off with Eclipse. Before I found the setting where all the windows are stuck together I hated it, now I can tolerate it.
iPhone SDK: strange indeed. I understand Apple's desire to control their environment but in this day and age it just seems a little sleazy and they are missing out on so much by destroying developer goodwill.
Objective-C: I've known about it for years but never even took a look at it. The syntax is off-putting but I'm actually very intrigued by the language. I think it's an interesting third leg between C++ and C#, both of which I like a lot. Is there any chance Obj-C will break out of the Mac sandbox due to the uptick in the popularity of Apple technology?
Curious to read your thoughts,
Andrew
|
I'm in the same boat as you (somewhat). I've been developing in C# for 7 years, ever since .NET 1.0. Over the past couple weeks I've been teaching myself Cocoa and Objective-C. Here are my impressions (note for note with yours)
Agreed in that clutter can be a problem. I tend to use Spaces heavily when developing in XCode (put XCode in one space, Interface Builder in another space, Instruments in a third space). If you don't have Leopard (and thus, no Spaces), then use Command-H to hide your active window. Using that tends to clean things up quite a bit (however, it'd be nice if you could automagically Command-H the current window when Command-Tab'ing to another app).
I'm liking XCode more and more. I hate Visual Studio - I find it to be unstable, slow, and well, just kind of a crappy IDE. Comparatively I've found XCode to be fast, stable, and I like how it organizes and filters your files. I'm not too up on my XCode shortcuts, but I'm hoping there's a way I can quick-switch from one class to another (similar to ctrl +n shortcut in ReSharper). Intellisense could be better with regards to how it displays to the user, but I really like how it essentially creates a template and you can ctrl + / to jump to the next argument in a message.
I'm hating the documentation in XCode. The help system sucks, and for whatever reason it never finds what I'm searching for. I end up just googling for anything I need to know... I hope they improve the documentation. This is my biggest beef right now.
Not quite there yet, as I'm going through the full Cocoa framework for Mac desktops. So far I'm really, really liking what I see. One thing I will say is that it would be nice if the iPhone SDK allowed for garbage collection...
Objective-C - I've never used it; this is my first foray into it. At first I was kinda weirded out by the syntax and the square brackets for messaging, but it's really growing on me. It's so quick to skim a method and see the message calls that method makes. The more I use it, the more Objective-C just feels nice... however, templating/generics would be a welcome addition to the language.
All in all, my foray into Mac development has been enjoyable, and I'm excited to start working (today! yay!) on some actual mac/iphone projects.
|
103,512 | 103,868 |
Why use static_cast<int>(x) instead of (int)x?
|
I've heard that the static_cast function should be preferred to C-style or simple function-style casting. Is this true? Why?
|
The main reason is that classic C casts make no distinction between what we call static_cast<>(), reinterpret_cast<>(), const_cast<>(), and dynamic_cast<>(). These four things are completely different.
A static_cast<>() is usually safe. There is a valid conversion in the language, or an appropriate constructor that makes it possible. The only time it's a bit risky is when you cast down to an inherited class; you must make sure that the object is actually the descendant that you claim it is, by means external to the language (like a flag in the object). A dynamic_cast<>() is safe as long as the result is checked (pointer) or a possible exception is taken into account (reference).
A reinterpret_cast<>() (or a const_cast<>()) on the other hand is always dangerous. You tell the compiler: "trust me: I know this doesn't look like a foo (this looks as if it isn't mutable), but it is".
The first problem is that it's almost impossible to tell which one will occur in a C-style cast without looking at large and dispersed pieces of code and knowing all the rules.
Let's assume these:
class CDerivedClass : public CMyBase {...};
class CMyOtherStuff {...} ;
CMyBase *pSomething; // filled somewhere
Now, these two are compiled the same way:
CDerivedClass *pMyObject;
pMyObject = static_cast<CDerivedClass*>(pSomething); // Safe; as long as we checked
pMyObject = (CDerivedClass*)(pSomething); // Same as static_cast<>
// Safe; as long as we checked
// but harder to read
However, let's see this almost identical code:
CMyOtherStuff *pOther;
pOther = static_cast<CMyOtherStuff*>(pSomething); // Compiler error: Can't convert
pOther = (CMyOtherStuff*)(pSomething); // No compiler error.
// Same as reinterpret_cast<>
// and it's wrong!!!
As you can see, there is no easy way to distinguish between the two situations without knowing a lot about all the classes involved.
The second problem is that the C-style casts are too hard to locate. In complex expressions it can be very hard to see C-style casts. It is virtually impossible to write an automated tool that needs to locate C-style casts (for example a search tool) without a full blown C++ compiler front-end. On the other hand, it's easy to search for "static_cast<" or "reinterpret_cast<".
pOther = reinterpret_cast<CMyOtherStuff*>(pSomething);
// No compiler error.
// but the presence of a reinterpret_cast<> is
// like a Siren with Red Flashing Lights in your code.
// The mere typing of it should cause you to feel VERY uncomfortable.
That means that, not only are C-style casts more dangerous, but it's a lot harder to find them all to make sure that they are correct.
|
104,009 | 104,449 |
How can I get full string value of variable in VC6 watch window?
|
I'm wanting to get the full value of a char[] variable in the VC6 watch window, but it only shows a truncated version. I can copy the value from a debug memory window, but that contains mixed lines of hex and string values. Surely there is a better way??
|
For large strings, you're pretty much stuck with the memory window - the tooltip would truncate eventually.
Fortunately, the memory window is easy to get data from. I tend to show it in 8-byte chunks so it's easy to manage: find your string data and cut & paste the lot into a blank window, then use Alt+drag to select columns and delete the hex values. Then start at the bottom of the string and repeatedly page-up/delete (the newline) to build your string (I use a macro for that bit).
I don't think there's any better way once you get long strings.
|
104,322 | 104,389 |
How do you install Boost on MacOS?
|
How do you install Boost on MacOS?
Right now I can't find bjam for the Mac.
|
Download MacPorts, and run the following command:
sudo port install boost
|
104,844 | 104,882 |
Default Printer in Unmanaged C++
|
I'm looking for a way to find the name of the Windows default printer using unmanaged C++ (found plenty of .NET examples, but no success unmanaged). Thanks.
|
The following works great for printing with the win32 API from C++:
char szPrinterName[255];
unsigned long lPrinterNameLength = sizeof(szPrinterName);   // in/out: pass the buffer size
GetDefaultPrinter( szPrinterName, &lPrinterNameLength );

HDC hPrinterDC;
hPrinterDC = CreateDC("WINSPOOL", szPrinterName, NULL, NULL);
In the future, instead of googling "unmanaged", try googling "win32 <subject>" or "win32 api <subject>".
|
104,959 | 105,032 |
Inspecting STL containers in Visual Studio debugging
|
If I have a std::vector or std::map variable, and I want to see the contents, it's a big pain to see the nth element while debugging. Is there a plugin, or some trick to making it easier to watch STL container variables while debugging (VS2003/2005/2008)?
|
For vectors, this thread on the msdn forums has a code snippet for setting a watch on a vector index that might help.
|
105,014 | 105,061 |
Does the 'mutable' keyword have any purpose other than allowing the variable to be modified by a const function?
|
A while ago I came across some code that marked a member variable of a class with the mutable keyword. As far as I can see it simply allows you to modify a variable in a const method:
class Foo
{
private:
    mutable bool done_;
public:
    void doSomething() const { ...; done_ = true; }
};
Is this the only use of this keyword or is there more to it than meets the eye? I have since used this technique in a class, marking a boost::mutex as mutable allowing const functions to lock it for thread-safety reasons, but, to be honest, it feels like a bit of a hack.
|
It allows the differentiation of bitwise const and logical const. Logical const is when an object doesn't change in a way that is visible through the public interface, like your locking example. Another example would be a class that computes a value the first time it is requested, and caches the result.
Since C++11, mutable can be used on a lambda to denote that things captured by value are modifiable (they aren't by default):
int x = 0;
auto f1 = [=]() mutable {x = 42;}; // OK
auto f2 = [=]() {x = 42;}; // Error: a by-value capture cannot be modified in a non-mutable lambda
|
105,252 | 105,339 |
How do I convert between big-endian and little-endian values in C++?
|
How do I convert between big-endian and little-endian values in C++?
For clarity, I have to translate binary data (double-precision floating point values and 32-bit and 64-bit integers) from one CPU architecture to another. This doesn't involve networking, so ntoh() and similar functions won't work here.
Note: The answer I accepted applies directly to compilers I'm targeting (which is why I chose it). However, there are other very good, more portable answers here.
|
If you're using Visual C++, do the following: include intrin.h and call the following functions:
For 16 bit numbers:
unsigned short _byteswap_ushort(unsigned short value);
For 32 bit numbers:
unsigned long _byteswap_ulong(unsigned long value);
For 64 bit numbers:
unsigned __int64 _byteswap_uint64(unsigned __int64 value);
8 bit numbers (chars) don't need to be converted.
Also, though these are only defined for unsigned values, they work for signed integers as well.
For floats and doubles it's more difficult than with plain integers, as these may or may not be in the host machine's byte order. You can get little-endian floats on big-endian machines and vice versa.
Other compilers have similar intrinsics as well.
In GCC for example you can directly call some builtins as documented here:
uint32_t __builtin_bswap32 (uint32_t x)
uint64_t __builtin_bswap64 (uint64_t x)
(no need to include anything). AFAIK byteswap.h declares the same functions in a non-GCC-centric way as well.
A 16-bit swap is just a bit rotation.
Calling the intrinsics instead of rolling your own gives you the best performance and code density, by the way.
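If you need a portable fallback, something along these lines works for any trivially copyable scalar (a sketch; prefer the intrinsics above where available):
#include <cstddef>
#include <cstring>

template <typename T>
T byteswap(T value)
{
    unsigned char bytes[sizeof(T)];
    std::memcpy(bytes, &value, sizeof(T));
    // reverse the byte order in place
    for (std::size_t i = 0; i < sizeof(T) / 2; ++i)
    {
        unsigned char tmp = bytes[i];
        bytes[i] = bytes[sizeof(T) - 1 - i];
        bytes[sizeof(T) - 1 - i] = tmp;
    }
    std::memcpy(&value, bytes, sizeof(T));
    return value;
}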
|
106,033 | 106,101 |
How do I call a .NET assembly from C/C++?
|
Suppose I am writing an application in C++ and C#. I want to write the low level parts in C++ and write the high level logic in C#. How can I load a .NET assembly from my C++ program and start calling methods and accessing the properties of my C# classes?
|
[Guid("123565C4-C5FA-4512-A560-1D47F9FDFA20")]
public interface IConfig
{
[DispId(1)]
string Destination{ get; }
[DispId(2)]
void Unserialize();
[DispId(3)]
void Serialize();
}
[ComVisible(true)]
[Guid("12AC8095-BD27-4de8-A30B-991940666927")]
[ClassInterface(ClassInterfaceType.None)]
public sealed class Config : IConfig
{
public Config()
{
}
public string Destination
{
get { return ""; }
}
public void Serialize()
{
}
public void Unserialize()
{
}
}
After that, you need to regasm your assembly. Regasm will add the necessary registry entries to allow your .NET component to be seen as a COM component. After that, you can call your .NET component from C++ in the same way as any other COM component.
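The C++ side might then look roughly like this (a sketch: the type library name and the namespace that #import generates are assumptions based on the sample above, and error checking is omitted):
#import "MyAssembly.tlb" raw_interfaces_only
#include <windows.h>

int main()
{
    CoInitialize(NULL);
    {
        // IConfigPtr is a smart pointer generated by #import
        MyAssembly::IConfigPtr config;
        config.CreateInstance(__uuidof(MyAssembly::Config));
        config->Unserialize();   // raw_interfaces_only: methods return HRESULTs
    }
    CoUninitialize();
    return 0;
}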
|
106,117 | 106,170 |
GCC - "expected unqualified-id before ')' token"
|
Please bear with me, I'm just learning C++.
I'm trying to write my header file (for class) and I'm running into an odd error.
cards.h:21: error: expected unqualified-id before ')' token
cards.h:22: error: expected `)' before "str"
cards.h:23: error: expected `)' before "r"
What does "expected unqualified-id before ')' token" mean? And what am I doing wrong?
Edit: Sorry, I didn't post the entire code.
/*
Card header file
[Author]
*/
// NOTE: Language docs here http://www.cplusplus.com/doc/tutorial/
#define Card
#define Hand
#define AppError
#include <string>
using namespace std;
// TODO: Docs here
class Card { // line 17
public:
    enum Suit {Club, Diamond, Spade, Heart};
    enum Rank {Two, Three, Four, Five, Six, Seven, Eight, Nine,
               Ten, Jack, Queen, King, Ace};

    Card(); // line 22
    Card(string str);
    Card(Rank r, Suit s);
Edit: I'm just trying to compile the header file by itself using "g++ file.h".
Edit: Closed question. My code is working now. Thanks everyone!
Edit: Reopened question after reading Etiquette: Closing your posts
|
Your issue is your #define. You did #define Card, so now everywhere Card is seen as a token, it will be replaced.
A #define Token with no replacement text (as opposed to #define Token Replacement) makes the token expand to nothing.
Remove the #define Card: it's making line 22 read ();, which is what the compiler is complaining about.
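Incidentally, if those #defines were meant to be include guards, the usual pattern looks like this (the guard name is just a convention):
#ifndef CARDS_H
#define CARDS_H

// ... declarations ...

#endif // CARDS_H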
|
106,347 | 107,503 |
Secure a DLL file with a license file
|
What is the best way to secure the use/loading of a DLL with a license file?
|
A couple of things you might want to consider:
Checksum the DLL. Using a cryptographic hash function, you can store the result inside the license file or inside the DLL. This gives you a way to verify that the DLL hasn't been modified and that the license file belongs with this DLL. A few simple byte-swapping techniques can quickly take your hash function off the beaten track (and thus make it hard to reproduce).
Don't store your hash as a string; split it into unsigned shorts stored in different places.
As Larry said, a MAC address is fairly common. There are lots of examples of how to get that on The Code Project, but be aware it's easy to fake these days.
My suggestion: use private/public keys for license generation.
In short, the modes of attack will be binary (modifying the instructions of your DLL file), so protect against that, or key generation, so make each license user-, machine-, and even install-specific.
|
106,412 | 106,423 |
Is there a good general method for debugging C++ macros?
|
In general, I occasionally have a chain of nested macros with a few preprocessor conditional elements in their definitions. These can be painful to debug since it's hard to directly see the actual code being executed.
A while ago I vaguely remember finding a compiler (gcc) flag to expand them, but I had trouble getting this to work in practice.
|
gcc -E will output the preprocessed source to stdout.
|
106,470 | 106,796 |
Changing the default settings for a console application
|
I would prefer that a console app would default to
multithreaded debug.
warning level 4.
build browse information.
no resource folder.
Does anyone know of any technique that would allow me to create a console app with my desired options, without setting them manually?
|
Yes, you can do that. What you want is to create your own project template. You can then select that template from the New Project wizard. I wasn't able to locate documentation on how to create a project template in Visual Studio 6, but this MSDN article explains the procedure for Visual Studio 2005. Hopefully you will find those instructions to be sufficiently similar.
|
106,862 | 108,160 |
Any experiences with Intel's Threading Building Blocks?
|
Intel's Threading Building Blocks (TBB) open source library looks really interesting. Even though there's even an O'Reilly Book about the subject I don't hear about a lot of people using it. I'm interested in using it for some multi-level parallel applications (MPI + threads) in Unix (Mac, Linux, etc.) environments. For what it's worth, I'm interested in high performance computing / numerical methods kinds of applications.
Does anyone have experiences with TBB? Does it work well? Is it fairly portable (including GCC and other compilers)? Does the paradigm work well for programs you've written? Are there other libraries I should look into?
|
I've introduced it into our code base because we needed a better malloc to use when we moved to a 16-core machine. With 8 cores and under it wasn't a significant issue. It has worked well for us. We plan on using the fine-grained concurrent containers next. Ideally we can make use of the real meat of the product, but that requires rethinking how we build our code. I really like the ideas in TBB, but it's not easy to retrofit onto a code base.
You can't think of TBB as just another threading library. They have a whole new model that really sits on top of threads and abstracts the threads away. You learn to think in tasks, parallel_for-type operations, and pipelines. If I were to build a new project I would probably try to model it in this fashion.
We work in Visual Studio and it works just fine. It was originally written for Linux/pthreads, so it runs just fine over there also.
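To give a flavour of the task style, here is a hedged sketch using the classic functor-based API (names are ours):
#include "tbb/parallel_for.h"
#include "tbb/blocked_range.h"

struct Square
{
    float* data;
    void operator()(const tbb::blocked_range<size_t>& r) const
    {
        for (size_t i = r.begin(); i != r.end(); ++i)
            data[i] *= data[i];   // each chunk of the range runs as a task
    }
};

void squareAll(float* data, size_t n)
{
    Square body = { data };
    tbb::parallel_for(tbb::blocked_range<size_t>(0, n), body);
}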
|
107,294 | 107,301 |
Change pointer to an array to get a specific array element
|
I understand the overall meaning of pointers and references (or at least I think I do), and I also understand that when I use new I am dynamically allocating memory.
My question is the following:
If i were to use cout << &p, it would display the "virtual memory location" of p.
Is there a way in which I could manipulate this "virtual memory location?"
For example, the following code shows an array of ints.
If I wanted to show the value of p[1] and I knew the "virtual memory location" of p, could I somehow do "&p + 1" and obtain the value of p[1] with cout << *p, which would now point to the second element in the array?
int *p;
p = new int[3];
p[0] = 13;
p[1] = 54;
p[2] = 42;
|
Sure, you can manipulate the pointer to access the different elements in the array, but you will need to manipulate the content of the pointer (i.e. the address of what p is pointing to), rather than the address of the pointer itself.
int *p = new int[3];
p[0] = 13;
p[1] = 54;
p[2] = 42;
cout << *p << ' ' << *(p+1) << ' ' << *(p+2);
Each addition (or subtraction) means the subsequent (prior) element in the array. If p points to a 4-byte type (e.g. int on typical 32-bit PCs) at, say, address 12345, then p+1 will point to 12349, not 12346. Note that you want to change the value that p contains before dereferencing it to access what it points to.
|
107,549 | 107,564 |
GCC compiling a dll with __stdcall
|
When we compile a DLL using __stdcall inside Visual Studio 2008, the compiled function names inside the DLL are:
FunctionName
But when we compile the same DLL with GCC (using wx-dev-cpp), GCC appends the number of parameters the function has, so the name of the function in Dependency Walker looks like:
FunctionName@numberOfParameters, e.g. FunctionName@8
How do you tell GCC compiler to remove @nn from exported symbols in the dll?
|
__stdcall decorates the function name by adding an underscore to the start, and the number of bytes of parameters to the end (separated by @).
So, a function:
void __stdcall Foo(int a, int b);
...would become _Foo@8.
If you list the function name (undecorated) in the EXPORTS section of your .DEF file, it is exported undecorated.
Perhaps this is the difference?
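For example, a minimal .DEF file like this (module name hypothetical) exports the function without decoration:
LIBRARY MyDll
EXPORTS
    Foo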
|
107,591 | 107,859 |
Unit testing MFC UI applications?
|
How do you unit test a large MFC UI application?
We have a few large MFC applications that have been in development for many years. We use some standard automated QA tools to run basic scripts to check fundamentals, file open, etc. These are run by the QA group after the daily build.
But we would like to introduce procedures such that individual developers can build and run tests against dialogs, menus, and other visual elements of the application before submitting code to the daily build.
I have heard of techniques such as hidden test buttons on dialogs that only appear in debug builds. Are there any standard toolkits for this?
Environment is C++/C/FORTRAN, MSVC 2005, Intel FORTRAN 9.1, Windows XP/Vista x86 & x64.
|
It depends on how the App is structured. If logic and GUI code is separated (MVC) then testing the logic is easy. Take a look at Michael Feathers "Humble Dialog Box" (PDF).
EDIT: If you think about it: You should very carefully refactor if the App is not structured that way. There is no other technique for testing the logic. Scripts which simulate clicks are just scratching the surface.
It is actually pretty easy:
Assume your control/window/whatever changes the contents of a listbox when the user clicks a button and you want to make sure the listbox contains the right stuff after the click.
Refactor so that there is a separate list with the items for the listbox to show. The items are stored in that list, not extracted from wherever your data comes from. The code that makes the listbox list things knows only about the new list.
Then you create a new controller object which will contain the logic code. The method that handles the button click only calls mycontroller->ButtonWasClicked(). It does not know about the listbox or anything else.
MyController::ButtonWasClicked() does what needs to be done for the intended logic, prepares the item list, and tells the control to update. For that to work you need to decouple the controller and the control by creating an interface (pure virtual class) for the control. The controller knows only an object of that type, not the control itself.
That's it. The controller contains the logic code and knows the control only via the interface. Now you can write regular unit tests for MyController::ButtonWasClicked() by mocking the control. If you have no idea what I am talking about, read Michael's article. Twice. And again after that.
(Note to self: must learn not to blather that much)
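A bare-bones sketch of that shape (names are mine, not Feathers'):
#include <string>
#include <vector>

// The interface the real control implements; tests implement a mock instead.
struct IListView
{
    virtual void ShowItems(const std::vector<std::string>& items) = 0;
    virtual ~IListView() {}
};

class MyController
{
    IListView& view_;
public:
    explicit MyController(IListView& view) : view_(view) {}

    void ButtonWasClicked()
    {
        std::vector<std::string> items;
        items.push_back("built from the real data source");
        view_.ShowItems(items);   // the control only renders; the logic lives here
    }
};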
|
107,616 | 108,032 |
XML-RPC: best way to handle 64-bit values?
|
So the official XML-RPC standard doesn't support 64-bit values. But in these modern times, 64-bit values are increasingly common.
How do you handle these? What XML-RPC extensions are the most common? What language bindings are there? I'm especially interested in Python and C++, but all information is appreciated.
|
Some libraries support 64-bit extensions, indeed, but there doesn't seem to be a standard. xmlrpc-c, for example, has a so-called i8, but it doesn't work with Python (at least not by default).
I would recommend that you either:
Convert the integer to a string by hand and send it as such. XML-RPC will convert it to a string anyway, so I would say this is reasonable.
Break it into two 32-bit integers and send them as such.
|
108,047 | 108,060 |
Whats the best Ribbon UI control to retro fit to a legacy MFC application build with VC2005?
|
What experience have you had with introducing a Ribbon style control to legacy MFC applications?
I know it exists in the new VC2008 Feature Pack, but changing compilers from VC2005 is a big deal for our source base and its integration with our environment: Intel FORTRAN, ClearCase, many 3rd-party libraries.
There are quite a few different commercial implementations, most focusing on C#/VB .NET, and only a few for native C++/MFC.
I have read all the usual reviews found by Google; most are quite old now, so I am interested to hear from people who have actually done it, been through the pain barrier, and released a legacy application built with VC2005 and a Ribbon UI.
We currently use a very old version of Stingray Objective Toolkit to provide our MFC extensions like customizable toolbars, docking windows, etc.
Has anyone used Prof-UIS? Compared to the other commercial ones it's relatively cheap: unlimited developer licensing is a tenth the cost of the others.
Are there any free, open source, or LGPL'd ones available?
|
In my projects I'm using the MFC Feature Pack in Visual Studio 2008, which is based on code from BCGSoft. Their BCGControlBar Library Professional Edition includes a ribbon control and is compatible with Visual Studio 2005.
I'm not aware of any open source ribbon control libraries for C++, though.
|
108,518 | 108,575 |
RSA encryption library for C++
|
I am developing a Win32 application and I would like to use an RSA encryption library. Which library would you recommend?
|
I use the following library:
http://www.efgh.com/software/rsa.htm
It's public domain, compact, self contained, and does the work well.
|
108,768 | 108,781 |
Needless pointer-casts in C
|
I got a comment to my answer on this thread:
Malloc inside a function call appears to be getting freed on return?
In short I had code like this:
int * somefunc (void)
{
    int * temp = (int*) malloc (sizeof (int));
    temp[0] = 0;
    return temp;
}
I got this comment:
Can I just say, please don't cast the return value of malloc? It is not required and can hide errors.
I agree that the cast is not required in C. It is mandatory in C++, so I usually add them just in case I have to port the code to C++ one day.
However, I wonder how casts like this can hide errors. Any ideas?
Edit:
Seems like there are very good and valid arguments on both sides. Thanks for posting, folks.
|
It seems fitting I post an answer, since I left the comment :P
Basically, if you forget to include stdlib.h the compiler will assume malloc returns an int. Without casting, you will get a warning. With casting you won't.
So by casting you get nothing, and run the risk of suppressing legitimate warnings.
Much is written about this, a quick google search will turn up more detailed explanations.
edit
It has been argued that
TYPE * p;
p = (TYPE *)malloc(n*sizeof(TYPE));
makes it obvious when you accidentally don't allocate enough memory (because, say, you thought p was a TYPE when it was really some other type), and thus we should cast malloc because the advantage of this method outweighs the smaller cost of accidentally suppressing compiler warnings.
I would like to point out 2 things:
you should write p = malloc(sizeof(*p)*n); to always ensure you malloc the right amount of space
with the above approach, you need to make changes in 3 places if you ever change the type of p: once in the declaration, once in the malloc, and once in the cast.
In short, I still personally believe there is no need for casting the return value of malloc and it is certainly not best practice.
|
109,129 | 109,161 |
Dynamically created operators
|
I created a program using dev-cpp and wxwidgets which solves a puzzle.
The user must fill in the operation blocks and the result blocks, and the program will solve it. I'm solving it using brute force: I generate all non-repeating 9-digit number combinations using a recursive algorithm. It does this pretty fast.
Up to here all is great!
But the problem is when my program evaluates the blocks depending on the character they contain. It's extremely slow (it never gets the answer) because of the char comparisons against +, -, *, etc. I'm doing a CASE.
Is there some way, or some programming language, that allows dynamic creation of operators? Then I could define the operator ROW1COL2 to be a +, and likewise for all the other operations.
I leave a screenshot of the app, so its easier to understand how the puzzle works.
http://www.imageshare.web.id/images/9gg5cev8vyokp8rhlot9.png
PS: The algorithm works; I tried it with a trivial puzzle and it solved it in a second.
|
Not sure that this is really what you're looking for but..
Any object-oriented language such as C++ or C# will allow you to create an "Operator" base class and then derive a "PlusOperator" or "MinusOperator" etc. from it. This is the standard way to avoid such case statements.
However I am not sure this will solve your performance problem.
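That said, in C++ you can get close to "defining the operator ROW1COL2 to be a +" with a grid of function pointers, avoiding the per-evaluation CASE (a quick sketch):
int add(int a, int b) { return a + b; }
int sub(int a, int b) { return a - b; }
int mul(int a, int b) { return a * b; }

typedef int (*BinOp)(int, int);

int main()
{
    BinOp grid[2][2] = { { add, sub },   // e.g. ROW1COL1 = +, ROW1COL2 = -
                         { mul, add } };
    int result = grid[0][1](7, 2);       // applies '-': result == 5
    return result == 5 ? 0 : 1;
}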
Using plain brute force for such a problem will result in an exponential solution. This will seem to work fast for small input, say, completing all the numbers. But if you want to complete the operations it's a much larger problem with a lot more possibilities.
So it's likely that even without the CASE your program would not be able to solve it.
The right way to try to solve this kind of problem is using some advanced search method with a heuristic function. See the A* (A-star) algorithm, for example.
Good luck!
|
109,317 | 109,341 |
Why use c strings in c++?
|
Is there any good reason to use C-strings in C++ nowadays? My textbook uses them in examples at some points, and I really feel like it would be easier just to use a std::string.
|
The only reasons I've had to use them is when interfacing with 3rd party libraries that use C style strings. There might also be esoteric situations where you would use C style strings for performance reasons, but more often than not, using methods on C++ strings is probably faster due to inlining and specialization, etc.
You can use the c_str() method in many cases when working with those sort of APIs, but you should be aware that the char * returned is const, and you should not modify the string via that pointer. In those sort of situations, you can still use a vector<char> instead, and at least get the benefit of easier memory management.
|
109,449 | 109,522 |
Getting a FILE* from a std::fstream
|
Is there a (cross-platform) way to get a C FILE* handle from a C++ std::fstream ?
The reason I ask is because my C++ library accepts fstreams and in one particular function I'd like to use a C library that accepts a FILE*.
|
The short answer is no.
The reason is that std::fstream is not required to use a FILE* as part of its implementation. So even if you manage to extract a file descriptor from the std::fstream object and manually build a FILE object, you will have other problems, because you will then have two buffered objects writing to the same file descriptor.
The real question is why do you want to convert the std::fstream object into a FILE*?
Though I don't recommend it, you could try looking up funopen().
Unfortunately, this is not a POSIX API (it's a BSD extension) so its portability is in question. Which is also probably why I can't find anybody that has wrapped a std::stream with an object like this.
FILE *funopen(
const void *cookie,
int (*readfn )(void *, char *, int),
int (*writefn)(void *, const char *, int),
fpos_t (*seekfn) (void *, fpos_t, int),
int (*closefn)(void *)
);
This allows you to build a FILE object and specify some functions that will be used to do the actual work. If you write appropriate functions you can get them to read from the std::fstream object that actually has the file open.
|
110,015 | 110,364 |
Not getting event arguments in IHTMLElement event handler
|
I've added a callback to an IHTMLElement instance but when the IDispatch::Invoke is called for the event, there are never any arguments (i.e. the pDispParams->cArgs and pDispParams->cNamedArgs are always 0). For example, I add a callback for an onmouseup event. From what I can tell, a callback for this event is supposed to receive a MouseEvent object. Is that correct? If so, what do I need to do to ensure this happens?
This is using the MSHTML for IE 6 sp2 (or better) on Windows XP SP2.
|
Events arguments for all DOM events including onmouseup are stored in the parent window's event property (IHTMLWindow2::event)
If you don't already have the parent window cached, IHTMLElement has a document property which returns an IHTMLDocument interface. From that you can query for IHTMLDocument2 which has a parentWindow property. The IHTMLWindow2 that is returned has the event property you're looking for. You should be able to query for the event interface from there.
|
110,157 | 110,255 |
How to retrieve all keys (or values) from a std::map and put them into a vector?
|
This is one of the possible ways I come out:
struct RetrieveKey
{
    template <typename T>
    typename T::first_type operator()(T keyValuePair) const
    {
        return keyValuePair.first;
    }
};

map<int, int> m;
vector<int> keys;

// Retrieve all keys
transform(m.begin(), m.end(), back_inserter(keys), RetrieveKey());

// Dump all keys
copy(keys.begin(), keys.end(), ostream_iterator<int>(cout, "\n"));
Of course, we can also retrieve all values from the map by defining another functor RetrieveValues.
Is there any other way to achieve this easily? (I'm always wondering why std::map does not include a member function for us to do so.)
|
While your solution should work, it can be difficult to read depending on the skill level of your fellow programmers. Additionally, it moves functionality away from the call site, which can make maintenance a little more difficult.
I'm not sure if your goal is to get the keys into a vector or print them to cout so I'm doing both. You may try something like this:
std::map<int, int> m;
std::vector<int> key, value;

for(std::map<int,int>::iterator it = m.begin(); it != m.end(); ++it) {
    key.push_back(it->first);
    value.push_back(it->second);
    std::cout << "Key: " << it->first << std::endl;    // std::endl, not std::endl()
    std::cout << "Value: " << it->second << std::endl;
}
Or even simpler, if you are using Boost:
map<int,int> m;
pair<int,int> me; // what a map<int, int> is made of
vector<int> v;

BOOST_FOREACH(me, m) {
    v.push_back(me.first);
    cout << me.first << "\n";
}
Personally, I like the BOOST_FOREACH version because there is less typing and it is very explicit about what it is doing.
|
110,393 | 110,425 |
TinyXML: Save document to char * or string
|
I'm attempting to use TinyXML to read and save from memory, instead of only reading and saving files to disk.
It seems that the document's Parse function can load from a char *. But then I need to save the document back to a char * when I'm done with it. Does anyone know how to do this?
Edit: The printing & streaming functions aren't what I'm looking for. They output in a viewable format, I need the actual xml content.
Edit: Printing is cool.
|
I'm not familiar with TinyXML, but from the documentation it seems that by using operator << to a C++ stream (so you can use C++ string streams) or the TiXmlPrinter class, you can get an STL string without using a file. See the TinyXML documentation (look for the "Printing" section).
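Based on that section of the docs, something like this should produce the XML as a string (a sketch; untested):
#include <string>
#include "tinyxml.h"

std::string documentToString(TiXmlDocument& doc)
{
    TiXmlPrinter printer;
    doc.Accept(&printer);    // the printer visits the document tree
    return printer.Str();    // or printer.CStr() for a const char*
}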
|
110,436 | 110,608 |
Cleaning up Legacy Code "header spaghetti"
|
Any recommended practices for cleaning up "header spaghetti" which is causing extremely
slow compilation times (Linux/Unix)?
Is there any equivalent to "#pragma once" with GCC?
(found conflicting messages regarding this)
Thanks.
|
Assuming you're familiar with "include guards" (#ifdef at the begining of the header..), an additional way of speeding up build time is by using external include guards.
It was discussed in "Large Scale C++ Software Design". The idea is that classic include guards, unlike #pragma once, do not spare you the preprocessor parsing required to ignore the header from the 2nd time on (i.e. it still has to parse and look for the start and end of the include guard). With external include guards you place the #ifdef's around the #include line itself.
So it looks like this:
#ifndef MY_HEADER
#include "myheader.h"
#endif
and of course within the H file you have the classic include guard
#ifndef MY_HEADER
#define MY_HEADER
// content of header
#endif
This way the myheader.h file isn't even opened / parsed by the preprocessor, and it can save you a lot of time in large projects, especially when header files sit on shared remote locations, as they sometimes do.
again, it's all in that book. hth
|
110,833 | 110,856 |
Dynamically importing a C++ class from a DLL
|
What is the correct way to import a C++ class from a DLL? We're using Visual C++.
There's the dllexport/exports.def+LoadLibrary+GetProcAddress trifecta, but it doesn't work on C++ classes, only C functions. Is this due to C++ name-mangling? How do I make this work?
|
Found the solution at http://www.codeproject.com/KB/DLL/XDllPt4.aspx
Thanks for your efforts guys & girls
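In case the link rots: yes, plain GetProcAddress fails on classes because of C++ name mangling. The usual workaround (and roughly what that article describes, if memory serves) is to export an extern "C" factory function returning an abstract interface, so no mangled names cross the DLL boundary. A hedged sketch:
// Shared header:
class IFoo
{
public:
    virtual void Bar() = 0;
    virtual void Destroy() = 0; // delete inside the DLL that allocated it
};

// In the DLL:
// extern "C" __declspec(dllexport) IFoo* CreateFoo();

// In the client:
// typedef IFoo* (*CreateFooFn)();
// HMODULE h = LoadLibraryW(L"foo.dll");
// CreateFooFn create = (CreateFooFn)GetProcAddress(h, "CreateFoo");
// IFoo* foo = create();
// foo->Bar();
// foo->Destroy();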
|
111,023 | 111,078 |
How to get a full call stack in Visual Studio 2005?
|
How can I get a full call stack for a c++ application developed with Visual Studio 2005? I would like to have a full call stack including the code in the system libraries.
Do I have to change some settings in Visual Studio, or do I have to install additional software?
|
Get debug information for all project dependencies. This is specified under the "Configuration Properties -> C/C++ -> General" section of the project properties.
On the menu, go to "Tools -> Options" then select "Debugging -> Symbols".
Add a new symbol location (the folder icon) that points to Microsoft's free symbol server: symsrv*symsrv.dll*c:\symbols*http://msdl.microsoft.com/downloads/symbols
Fill out the "cache symbols" field with some place locally so you don't go to the internet all the time.
|
111,391 | 111,399 |
Is it a problem if multiple different accepting sockets use the same OpenSSL context?
|
Is it OK if the same OpenSSL context is used by several different accepting sockets?
In particular I'm using the same boost::asio::ssl::context with 2 different listening sockets.
|
Yep, SSL_CTX--which I believe is the underlying data structure--is just a global data structure used by your program. From ssl(3):
SSL_CTX (SSL Context)
That's the global context structure which is created by a server or client once per program life-time and which holds mainly default values for the SSL structures which are later created for the connections.
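To make it concrete, a hedged sketch against the boost 1.36-era API (the ports are illustrative, and certificate setup/handshaking is omitted):
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>

using boost::asio::ip::tcp;

int main()
{
    boost::asio::io_service io;

    // One context, created once, shared by both listeners.
    boost::asio::ssl::context ctx(io, boost::asio::ssl::context::sslv23);

    tcp::acceptor acceptor1(io, tcp::endpoint(tcp::v4(), 443));
    tcp::acceptor acceptor2(io, tcp::endpoint(tcp::v4(), 8443));

    // Every accepted connection gets its own stream built from the same ctx.
    boost::asio::ssl::stream<tcp::socket> stream1(io, ctx);
    acceptor1.accept(stream1.lowest_layer());

    return 0;
}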
|
111,415 | 111,479 |
Strange call stack, could it be problem in asio's usage of openssl?
|
I have this strange call stack and I am stumped to understand why.
It seems to me that asio calls OpenSSL's read and then gets a negative return value (-37).
Asio seems to then try to use it inside the memcpy function.
The function that causes this call stack is used hunderds of thousands of times without this error.
It happens only rarely, about once a week.
ulRead = (boost::asio::read(spCon->socket(), boost::asio::buffer(_requestHeader, _requestHeader.size()), boost::asio::transfer_at_least(_requestHeader.size()), error_));
Note that the request header's size is always exactly 3 bytes.
Could anyone shed some light on possible reasons?
Note: I'm using boost asio 1.36
Here is the crashing call stack; the crash happens in memcpy because of the huge "count":
|
A quick look at evp_lib.c shows that it tries to pull a length from the cipher context, and in your case gets a Very Bad Value(tm). It then uses this value to copy a string (which does the memcpy). My guess is something is trashing your cipher, be it a thread safety problem, or reading more bytes into a buffer than allowed.
Relevant source:
int EVP_CIPHER_set_asn1_iv(EVP_CIPHER_CTX *c, ASN1_TYPE *type)
{
int i=0,j;
if (type != NULL)
{
j=EVP_CIPHER_CTX_iv_length(c);
OPENSSL_assert(j <= sizeof c->iv);
i=ASN1_TYPE_set_octetstring(type,c->oiv,j);
}
return(i);
}
|
111,478 | 111,531 |
Why is it wrong to use std::auto_ptr<> with standard containers?
|
Why is it wrong to use std::auto_ptr<> with standard containers?
|
The C++ Standard says that an STL element must be "copy-constructible" and "assignable." In other words, an element must be able to be assigned or copied and the two elements are logically independent. std::auto_ptr does not fulfill this requirement.
Take for example this code:
class X
{
};
std::vector<std::auto_ptr<X> > vecX;
vecX.push_back(std::auto_ptr<X>(new X));
std::auto_ptr<X> pX = vecX[0]; // vecX[0] is assigned NULL.
To overcome this limitation, you should use the std::unique_ptr, std::shared_ptr or std::weak_ptr smart pointers or the boost equivalents if you don't have C++11. Here is the boost library documentation for these smart pointers.
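For example, the same code with a reference-counted pointer is safe; a minimal sketch with std::shared_ptr (C++11):
#include <memory>
#include <vector>

class X {};

int main()
{
    std::vector< std::shared_ptr<X> > vecX;
    vecX.push_back(std::shared_ptr<X>(new X));

    std::shared_ptr<X> pX = vecX[0]; // shares ownership; vecX[0] stays valid
    return 0;
}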
|
111,630 | 111,661 |
Using the Window API, how do I ensure controls retain a native appearance?
|
Some of the controls I've created seem to default to the old Windows 95 theme, how do I prevent this? Here's an example of a button that does not retain the Operating System's native appearance (I'm using Vista as my development environment):
HWND button = CreateWindowEx(NULL, L"BUTTON", L"OK", WS_VISIBLE | WS_CHILD | BS_PUSHBUTTON,
170, 340, 80, 25, hwnd, NULL, GetModuleHandle(NULL), NULL);
I'm using native C++ with the Windows API, no managed code.
|
I believe it has got nothing to do with your code, but you need to set up a proper manifest file to get the themed controls.
Some info here: @msdn.com and here: @blogs.msdn.com
You can see a difference between application with and without manifest here: heaventools.com
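If you'd rather not maintain a separate .manifest file, one common Visual C++ shortcut (a hedged sketch, not the only way) is to embed the Common Controls 6 dependency with a linker pragma and initialize the common controls at startup:
#include <windows.h>
#include <commctrl.h>
#pragma comment(lib, "comctl32.lib")

// Embeds a manifest dependency on version 6 of the common controls (the themed ones).
#pragma comment(linker, "/manifestdependency:\"type='win32' \
name='Microsoft.Windows.Common-Controls' version='6.0.0.0' \
processorArchitecture='*' publicKeyToken='6595b64144ccf1df' \
language='*'\"")

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    INITCOMMONCONTROLSEX icc = { sizeof(icc), ICC_STANDARD_CLASSES };
    InitCommonControlsEx(&icc);

    // ... create your windows and controls as usual ...
    return 0;
}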
|
112,085 | 112,100 |
Is this C++ structure initialization trick safe?
|
Instead of having to remember to initialize a simple 'C' structure, I might derive from it and zero it in the constructor like this:
struct MY_STRUCT
{
int n1;
int n2;
};
class CMyStruct : public MY_STRUCT
{
public:
CMyStruct()
{
memset(this, 0, sizeof(MY_STRUCT));
}
};
This trick is often used to initialize Win32 structures and can sometimes set the ubiquitous cbSize member.
Now, as long as there isn't a virtual function table for the memset call to destroy, is this a safe practice?
|
PREAMBLE:
While my answer is still Ok, I find litb's answer quite superior to mine because:
It teaches me a trick that I did not know (litb's answers usually have this effect, but this is the first time I write it down)
It answers exactly the question (that is, initializing the original struct's part to zero)
So please, consider litb's answer before mine. In fact, I suggest the question's author to consider litb's answer as the right one.
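For reference, the trick in question is (as I understand it) value-initializing the base class in the constructor's initialization list, which zeroes a POD base without memset. A hedged sketch (note my remarks below about g++):
class CMyStruct : public MY_STRUCT
{
public:
    CMyStruct() : MY_STRUCT() {} // value-initialization zeroes the POD base
};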
Original answer
Putting a true object (e.g. std::string) inside will break, because the true object will be initialized before the memset and then overwritten by zeroes.
Using the initialization list doesn't work for g++ (I'm surprised...). Initialize it instead in the CMyStruct constructor body. It will be C++ friendly:
class CMyStruct : public MY_STRUCT
{
public:
CMyStruct() { n1 = 0 ; n2 = 0 ; }
};
P.S.: I assumed you had no control over MY_STRUCT, of course. With control, you would have added the constructor directly inside MY_STRUCT and forgotten about inheritance. Note that you can add non-virtual methods to a C-like struct and still have it behave as a struct.
EDIT: Added missing parenthesis, after Lou Franco's comment. Thanks!
EDIT 2 : I tried the code on g++, and for some reason, using the initialization list does not work. I corrected the code using the body constructor. The solution is still valid, though.
Please reevaluate my post, as the original code was changed (see changelog for more info).
EDIT 3 : After reading Rob's comment, I guess he has a point worthy of discussion: "Agreed, but this could be an enormous Win32 structure which may change with a new SDK, so a memset is future proof."
I disagree: Knowing Microsoft, it won't change because of their need for perfect backward compatibility. They will instead create an extended MY_STRUCTEx struct with the same initial layout as MY_STRUCT, with additional members at the end, recognizable through a "size" member variable like the struct used for RegisterWindow, IIRC.
So the only valid point remaining from Rob's comment is the "enormous" struct. In this case, perhaps a memset is more convenient, but you will have to make MY_STRUCT a variable member of CMyStruct instead of inheriting from it.
I see another hack, but I guess this would break because of possible struct alignment problem.
EDIT 4: Please take a look at Frank Krueger's solution. I can't promise it's portable (I guess it is), but it is still interesting from a technical viewpoint because it shows one case where, in C++, the "this" pointer "address" moves from its base class to its inherited class.
|
112,277 | 112,302 |
Best introduction to C++ template metaprogramming?
|
Static metaprogramming (aka "template metaprogramming") is a great C++ technique that allows the execution of programs at compile-time. A light bulb went off in my head as soon as I read this canonical metaprogramming example:
#include <iostream>
using namespace std;
template< int n >
struct factorial { enum { ret = factorial< n - 1 >::ret * n }; };
template<>
struct factorial< 0 > { enum { ret = 1 }; };
int main() {
cout << "7! = " << factorial< 7 >::ret << endl; // 5040
return 0;
}
If one wants to learn more about C++ static metaprogramming, what are the best sources (books, websites, on-line courseware, whatever)?
|
[Answering my own question]
The best introductions I've found so far are chapter 10, "Static Metaprogramming in C++" from Generative Programming, Methods, Tools, and Applications by Krzysztof Czarnecki and Ulrich W. Eisenecker, ISBN-13: 9780201309775; and chapter 17, "Metaprograms" of C++ Templates: The Complete Guide by David Vandevoorder and Nicolai M. Josuttis, ISBN-13: 9780201734843.
Todd Veldhuizen has an excellent tutorial here.
A good resource for C++ programming in general is Modern C++ Design by Andrei Alexandrescu, ISBN-13: 9780201704310. This book mixes a bit of metaprogramming with other template techniques. For metaprogramming in particular, see sections 2.1 "Compile-Time Assertions", 2.4 "Mapping Integral Constants to Types", 2.6 "Type Selection", 2.7 "Detecting Convertibility and Inheritance at Compile Time", 2.9 "NullType and EmptyType" and 2.10 "Type Traits".
The best intermediate/advanced resource I've found is C++ Template Metaprogramming by David Abrahams and Aleksey Gurtovoy, ISBN-13: 9780321227256
If you'd prefer just one book, get C++ Templates: The Complete Guide since it is also the definitive reference for templates in general.
|
112,433 | 113,560 |
Should I use #define, enum or const?
|
In a C++ project I'm working on, I have a flag kind of value which can have four values. Those four flags can be combined. Flags describe the records in database and can be:
new record
deleted record
modified record
existing record
Now, for each record I wish to keep this attribute, so I could use an enum:
enum { xNew, xDeleted, xModified, xExisting }
However, in other places in code, I need to select which records are to be visible to the user, so I'd like to be able to pass that as a single parameter, like:
showRecords(xNew | xDeleted);
So, it seems I have three possible appoaches:
#define X_NEW 0x01
#define X_DELETED 0x02
#define X_MODIFIED 0x04
#define X_EXISTING 0x08
or
typedef enum { xNew = 1, xDeleted, xModified = 4, xExisting = 8 } RecordType;
or
namespace RecordType {
static const uint8 xNew = 1;
static const uint8 xDeleted = 2;
static const uint8 xModified = 4;
static const uint8 xExisting = 8;
}
Space requirements are important (byte vs int) but not crucial. With defines I lose type safety, and with enum I lose some space (integers) and probably have to cast when I want to do a bitwise operation. With const I think I also lose type safety since a random uint8 could get in by mistake.
Is there some other cleaner way?
If not, what would you use and why?
P.S. The rest of the code is rather clean modern C++ without #defines, and I have used namespaces and templates in few spaces, so those aren't out of question either.
|
Combine the strategies to reduce the disadvantages of a single approach. I work in embedded systems so the following solution is based on the fact that integer and bitwise operators are fast, low memory & low in flash usage.
Place the enum in a namespace to prevent the constants from polluting the global namespace.
namespace RecordType {
An enum declares and defines a compile-time checked type. Always use compile-time type checking to make sure arguments and variables are given the correct type. There is no need for the typedef in C++.
enum TRecordType { xNew = 1, xDeleted = 2, xModified = 4, xExisting = 8,
Create another member for an invalid state. This can be useful as error code; for example, when you want to return the state but the I/O operation fails. It is also useful for debugging; use it in initialisation lists and destructors to know if the variable's value should be used.
xInvalid = 16 };
Consider that you have two purposes for this type. To track the current state of a record and to create a mask to select records in certain states. Create an inline function to test if the value of the type is valid for your purpose; as a state marker vs a state mask. This will catch bugs as the typedef is just an int and a value such as 0xDEADBEEF may be in your variable through uninitialised or mispointed variables.
inline bool IsValidState( TRecordType v) {
switch(v) { case xNew: case xDeleted: case xModified: case xExisting: return true; }
return false;
}
inline bool IsValidMask( TRecordType v) {
return v >= xNew && v < xInvalid ;
}
Add a using declaration if you want to use the type often.
using RecordType::TRecordType;
The value checking functions are useful in asserts to trap bad values as soon as they are used. The quicker you catch a bug when running, the less damage it can do.
Here are some examples to put it all together.
void showRecords(TRecordType mask) {
assert(RecordType::IsValidMask(mask));
// do stuff;
}
void wombleRecord(TRecord rec, TRecordType state) {
assert(RecordType::IsValidState(state));
if (state == RecordType::xNew) {
// ...
}
}
TRecordType updateRecord(TRecord rec, TRecordType newstate) {
assert(RecordType::IsValidState(newstate));
//...
if (! access_was_successful) return RecordType ::xInvalid;
return newstate;
}
The only way to ensure correct value safety is to use a dedicated class with operator overloads and that is left as an exercise for another reader.
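As a hedged start on that exercise, overloading the bitwise operators keeps a mask typed as TRecordType instead of letting it decay to int:
inline RecordType::TRecordType operator|(RecordType::TRecordType a,
                                         RecordType::TRecordType b)
{
    return static_cast<RecordType::TRecordType>(
        static_cast<int>(a) | static_cast<int>(b));
}

// showRecords(RecordType::xNew | RecordType::xDeleted); // still a TRecordType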
|
112,612 | 112,737 |
Are C++ non-type parameters to (function) templates ordered?
|
I am hosting SpiderMonkey in a current project and would like to have template functions generate some of the simple property get/set methods, eg:
template <typename TClassImpl, int32 TClassImpl::*mem>
JSBool JS_DLL_CALLBACK WriteProp(JSContext* cx, JSObject* obj, jsval id, jsval* vp)
{
if (TClassImpl* pImpl = (TClassImpl*)::JS_GetInstancePrivate(cx, obj, &TClassImpl::s_JsClass, NULL))
return ::JS_ValueToInt32(cx, *vp, &(pImpl->*mem));
return JS_FALSE;
}
Used:
::JSPropertySpec Vec2::s_JsProps[] = {
{"x", 1, JSPROP_PERMANENT, &JsWrap::ReadProp<Vec2, &Vec2::x>, &JsWrap::WriteProp<Vec2, &Vec2::x>},
{"y", 2, JSPROP_PERMANENT, &JsWrap::ReadProp<Vec2, &Vec2::y>, &JsWrap::WriteProp<Vec2, &Vec2::y>},
{0}
};
This works fine, however, if I add another member type:
template <typename TClassImpl, JSObject* TClassImpl::*mem>
JSBool JS_DLL_CALLBACK WriteProp(JSContext* cx, JSObject* obj, jsval id, jsval* vp)
{
if (TClassImpl* pImpl = (TClassImpl*)::JS_GetInstancePrivate(cx, obj, &TClassImpl::s_JsClass, NULL))
return ::JS_ValueToObject(cx, *vp, &(pImpl->*mem));
return JS_FALSE;
}
Then Visual C++ 9 attempts to use the JSObject* wrapper for int32 members!
1>d:\projects\testing\jswnd\src\main.cpp(93) : error C2440: 'specialization' : cannot convert from 'int32 JsGlobal::Vec2::* ' to 'JSObject *JsGlobal::Vec2::* const '
1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast
1>d:\projects\testing\jswnd\src\main.cpp(93) : error C2973: 'JsWrap::ReadProp' : invalid template argument 'int32 JsGlobal::Vec2::* '
1> d:\projects\testing\jswnd\src\wrap_js.h(64) : see declaration of 'JsWrap::ReadProp'
1>d:\projects\testing\jswnd\src\main.cpp(93) : error C2440: 'initializing' : cannot convert from 'overloaded-function' to 'JSPropertyOp'
1> None of the functions with this name in scope match the target type
Surprisingly, parenthesizing JSObject* incurs a parse error! (unexpected '('). This is probably a VC++ error (can anyone test that "template void foo() {}" compiles in GCC?). Same error with "typedef JSObject* PObject; ..., PObject TClassImpl::*mem>", void, struct Undefined*, and double. Since the function usage is fully instantiated ("&ReadProp"), there should be no normal function overload semantics coming into play; it is a defined function at that point and gets priority over template functions. It seems the template ordering is failing here.
Vec2 is just:
class Vec2
{
public:
int32 x, y;
Vec2(JSContext* cx, JSObject* obj, uintN argc, jsval* argv);
static ::JSClass s_JsClass;
static ::JSPropertySpec s_JsProps[];
};
JSPropertySpec is described in JSAPI link in OP, taken from header:
typedef JSBool
(* JS_DLL_CALLBACK JSPropertyOp)(JSContext *cx, JSObject *obj, jsval id,
jsval *vp);
...
struct JSPropertySpec {
const char *name;
int8 tinyid;
uint8 flags;
JSPropertyOp getter;
JSPropertyOp setter;
};
|
Pretty sure VC++ has "issues" here. Comeau and g++ 4.2 are both happy with the following program:
struct X
{
int i;
void* p;
};
template<int X::*P>
void foo(X* t)
{
t->*P = 0;
}
template<void* X::*P>
void foo(X* t)
{
t->*P = 0;
}
int main()
{
X x;
foo<&X::i>(&x);
foo<&X::p>(&x);
}
VC++ 2008SP1, however, is having none of it.
I haven't the time to read through my standard to find out exactly what's what... but I think VC++ is in the wrong here.
|
112,738 | 113,134 |
How does boost bind work behind the scenes in general?
|
Without spending a long time reviewing the boost source code, could someone give me a quick rundown of how boost bind is implemented?
|
I like this piece of the bind source:
template<class R, class F, class L> class bind_t
{
public:
typedef bind_t this_type;
bind_t(F f, L const & l): f_(f), l_(l) {}
#define BOOST_BIND_RETURN return
#include <boost/bind/bind_template.hpp>
#undef BOOST_BIND_RETURN
};
Tells you almost all you need to know, really.
The bind_template header expands to a list of inline operator() definitions. For example, the simplest:
result_type operator()()
{
list0 a;
BOOST_BIND_RETURN l_(type<result_type>(), f_, a, 0);
}
We can see the BOOST_BIND_RETURN macro expands to return at this point so the line is more like return l_(type...).
The one parameter version is here:
template<class A1> result_type operator()(A1 & a1)
{
list1<A1 &> a(a1);
BOOST_BIND_RETURN l_(type<result_type>(), f_, a, 0);
}
It's pretty similar.
The listN classes are wrappers for the parameter lists. There is a lot of deep magic going on here that I don't really understand too much though. They have also overloaded operator() that calls the mysterious unwrap function. Ignoring some compiler specific overloads, it doesn't do a lot:
// unwrap
template<class F> inline F & unwrap(F * f, long)
{
return *f;
}
template<class F> inline F & unwrap(reference_wrapper<F> * f, int)
{
return f->get();
}
template<class F> inline F & unwrap(reference_wrapper<F> const * f, int)
{
return f->get();
}
The naming convention seems to be: F is the type of the function parameter to bind. R is the return type. L tends to be a list of parameter types. There are also a lot of complications because there are no less than nine overloads for different numbers of parameters. Best not to dwell on that too much.
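To make the shape concrete, here is a stripped-down, hedged re-creation of the idea for a single bound argument (none of boost's listN machinery or nine-parameter overloads):
#include <iostream>

// Stores the callable and the argument; operator() forwards the call.
template <class R, class F, class A1>
class simple_bind_t
{
    F f_;
    A1 a1_;
public:
    simple_bind_t(F f, A1 a1) : f_(f), a1_(a1) {}
    R operator()() { return f_(a1_); }
};

// Helper so the caller doesn't spell out the types, like boost::bind itself.
template <class R, class A1>
simple_bind_t<R, R (*)(A1), A1> simple_bind(R (*f)(A1), A1 a1)
{
    return simple_bind_t<R, R (*)(A1), A1>(f, a1);
}

int square(int x) { return x * x; }

int main()
{
    std::cout << simple_bind(&square, 7)() << std::endl; // prints 49
    return 0;
}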
|
112,831 | 112,834 |
How to get a stack trace when C++ program crashes? (using msvc8/2005)
|
Sometimes my c++ program crashes in debug mode, and what I got is a message box saying that an assertion failed in some of the internal memory management routines (accessing unallocated memory etc.). But I don't know where that was called from, because I didn't get any stack trace. How do I get a stack trace or at least see where it fails in my code (instead of library/ built-in routines)?
|
If you have a crash, you can get information about where the crash happened whether you have a debug or a release build. And you can see the call stack even if you are on a computer that does not have the source code.
To do this you need to use the PDB file that was built with your EXE. Put the PDB file inside the same directory as the EXE that crashed. Note: Even if you have the same source code, building twice and using the first EXE and the second PDB won't work. You need to use the exact PDB that was built with your EXE.
Then attach a debugger to the process that crashed. Example: windbg or VS.
Then simply checkout your call stack, while also having your threads window open. You will have to select the thread that crashed and check on the callstack for that thread. Each thread has a different call stack.
If you already have your VS debugger attached, it will automatically go to the source code that is causing the crash for you.
If the crash is happening inside a library you are using that you don't have the PDB for. There is nothing you can do.
|
112,897 | 113,073 |
Determining the size of a file larger than 4GB
|
The code currently does this and the fgetpos does handle files larger than 4GB but the seek returns an error, so any idea how to seek to the end of a file > 4GB?
fpos_t currentpos;
sok=fseek(fp,0,SEEK_END);
assert(sok==0,"Seek error!");
fgetpos(fp, &currentpos);
m_filesize=currentpos;
|
If you're on Windows, you want GetFileSizeEx (MSDN). The size comes back through a LARGE_INTEGER out-parameter, which is a 64-bit value.
On Linux stat64 (manpage) is correct; use fstat64 (via fileno()) if you're working with a FILE*.
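For instance, a minimal Windows sketch (the file name is a placeholder):
#include <windows.h>
#include <iostream>

int main()
{
    HANDLE hFile = CreateFileA("bigfile.bin", GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return 1;

    LARGE_INTEGER size; // 64-bit, so files over 4GB are fine
    if (GetFileSizeEx(hFile, &size))
        std::cout << "File size: " << size.QuadPart << " bytes" << std::endl;

    CloseHandle(hFile);
    return 0;
}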
|
112,946 | 112,961 |
Accessing files across the windows network with near MAX_PATH length
|
I'm using C++ and accessing a UNC path across the network. This path is slightly greater than MAX_PATH. So I cannot obtain a file handle.
But if I run the program on the computer in question, the path is not greater than MAX_PATH. So I can get a file handle. If I rename the file to have less characters (minus length of computer name) I can access the file.
Can this file be accessed across the network even know the computer name in the UNC path puts it over the MAX_PATH limit?
|
I recall that there is some feature like using \\?\ at the start of the path to get around the MAX_PATH limit. Here is a reference on MSDN:
http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx
For remote machines, you would use a path name such as: \\?\unc\server\share\path\file. The \\?\unc\ is the special prefix and is not used as part of the actual filename.
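A hedged sketch (server, share and path are placeholders); note that the prefix needs the wide-character API and a fully qualified path:
#include <windows.h>

int main()
{
    // \\?\UNC\server\share\... replaces the leading \\ of the plain UNC path.
    HANDLE h = CreateFileW(
        L"\\\\?\\UNC\\server\\share\\some\\very\\long\\path\\file.txt",
        GENERIC_READ, FILE_SHARE_READ, NULL,
        OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

    if (h != INVALID_HANDLE_VALUE)
        CloseHandle(h);
    return 0;
}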
|
113,033 | 113,039 |
How to use one object's method to update another object's attribute?
|
I have three (C++) classes: Player, Hand, and Card.
Player has a member, hand, that holds a Hand. It also has a method, getHand(), that returns the contents of hand.
Hand Player::getHand() {
return hand;
}
Hand has a method, addCard(Card c), that adds a card to the hand.
I want to do this:
player1.getHand().addCard(c);
but it doesn't work. It doesn't throw an error, so it's doing something. But if I examine the contents of player1's hand afterward, the card hasn't been added.
How can I get this to work?
|
If getHand() is not returning a reference you will be in trouble.
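A minimal sketch of the fix (the declaration in the class body must match):
// In the class definition: Hand& getHand();

Hand& Player::getHand() {
    return hand; // a reference to the member, not a copy
}

// Now this adds the card to player1's actual hand:
// player1.getHand().addCard(c);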
|
113,150 | 113,155 |
What is the cost of using a pointer to member function vs. a switch?
|
I have the following situation:
class A
{
public:
A(int whichFoo);
int foo1();
int foo2();
int foo3();
int callFoo(); // calls one of the foo's depending on the value of whichFoo
};
In my current implementation I save the value of whichFoo in a data member in the constructor and use a switch in callFoo() to decide which of the foo's to call. Alternatively, I can use a switch in the constructor to save a pointer to the right fooN() to be called in callFoo().
My question is which way is more efficient if an object of class A is only constructed once, while callFoo() is called a very large number of times. So in the first case we have multiple executions of a switch statement, while in the second there is only one switch, and multiple calls of a member function using the pointer to it. I know that calling a member function using a pointer is slower than just calling it directly. Does anybody know if this overhead is more or less than the cost of a switch?
Clarification: I realize that you never really know which approach gives better performance until you try it and time it. However, in this case I already have approach 1 implemented, and I wanted to find out if approach 2 can be more efficient at least in principle. It appears that it can be, and now it makes sense for me to bother to implement it and try it.
Oh, and I also like approach 2 better for aesthetic reasons. I guess I am looking for a justification to implement it. :)
|
How sure are you that calling a member function via a pointer is slower than just calling it directly? Can you measure the difference?
In general, you should not rely on your intuition when making performance evaluations. Sit down with your compiler and a timing function, and actually measure the different choices. You may be surprised!
More info: There is an excellent article Member Function Pointers and the Fastest Possible C++ Delegates which goes into very deep detail about the implementation of member function pointers.
|
113,288 | 113,294 |
Multiple services from the same executable
|
I've written a small service (plain Win32) and I'd like to know if it's possible to run multiple instances of it when multiple users are logged on.
Basically, let's say we've got UserA and UserB for UserA the service would log on as "domain\UserA" and for UserB the service would log on as "domain\UserB" - this is from the same executable of course. I can change the logon dynamically using the ChangeServiceConfig() function, but it changes it system-wide it seems, while I'd like each user to have his own copy of the service running only for him.
Thank you in advance for any pointers.
|
Win32 services are designed to be system-wide, and start running before any user is logged in. If you want something to run on a per-user basis, it's probably better to design it as a regular application and run it from the user's Startup group.
|
113,385 | 113,399 |
Declare an object even before that class is created
|
Is there anyway to declare an object of a class before the class is created in C++? I ask because I am trying to use two classes, the first needs to have an instance of the second class within it, but the second class also contains an instance of the first class. I realize that you may think I might get into an infinite loop, but I actually need to create and instance of the second class before the first class.
|
You can't do something like this:
class A {
B b;
};
class B {
A a;
};
The most obvious problem is that the compiler doesn't know how large it needs to make class A, because the size of B depends on the size of A!
You can, however, do this:
class B; // this is a "forward declaration"
class A {
B *b;
};
class B {
A a;
};
Declaring class B as a forward declaration allows you to use pointers (and references) to that class without yet having the whole class definition.
|
113,424 | 113,442 |
Converting Reverse Polish Notation
|
Is there any way to interpret Reverse Polish Notation into "normal" mathematical notation when using either C++ or C#? I work for an engineering firm, so they use RPN occasionally and we need a way to convert it. Any suggestions?
|
Yes. Think of how a RPN calculator works. Now, instead of calculating the value, instead you add the operation to the tree. So, for example, 2 3 4 + *, when you get to the +, then rather than putting 7 on the stack, you put (+ 3 4) on the stack. And similarly when you get to the * (your stack will look like 2 (+ 3 4) * at that stage), it becomes (* 2 (+ 3 4)).
This is prefix notation, which you then have to convert to infix. Traverse the tree left-to-right, depth first. For each "inner level", if the precedence of the operator is lower, you will have to place the operation in brackets. Here, then, you will say, 2 * (3 + 4), because the + has lower precedence than *.
Hope this helps!
Edit: There's a subtlety (apart from not considering unary operations in the above): I assumed left-associative operators. For right-associative (e.g., **), then you get different results for 2 3 4 ** ** ⇒ (** 2 (** 3 4)) versus 2 3 ** 4 ** ⇒ (** (** 2 3) 4).
When reconstructing infix from the tree, both cases show that the precedence doesn't require bracketing, but in reality the latter case needs to be bracketed ((2 ** 3) ** 4). So, for right-associative operators, the left-hand branch needs to be higher-precedence (instead of higher-or-equal) to avoid bracketing.
Also, further thoughts are that you need brackets for the right-hand branch of - and / operators too.
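For the basic stack construction, a minimal C++ sketch that fully parenthesizes (which sidesteps the precedence and associativity subtleties above at the cost of extra brackets):
#include <iostream>
#include <sstream>
#include <stack>
#include <string>

int main()
{
    std::istringstream in("2 3 4 + *"); // space-separated RPN
    std::stack<std::string> st;
    std::string tok;
    while (in >> tok)
    {
        if (tok == "+" || tok == "-" || tok == "*" || tok == "/")
        {
            std::string rhs = st.top(); st.pop();
            std::string lhs = st.top(); st.pop();
            st.push("(" + lhs + " " + tok + " " + rhs + ")");
        }
        else
        {
            st.push(tok); // operand
        }
    }
    std::cout << st.top() << std::endl; // prints (2 * (3 + 4))
    return 0;
}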
|
113,818 | 113,879 |
Class library with support for several persistence strategies
|
I am developing a C++ class library containing domain model classes, and I would like to add support for instantiating these classes from various persistence mechanisms, i.e. databases and file. The user of the class library should be given an interface(?) against which to program a class that can transfer the data from/to the persistence mechanism.
I know of the Data Access Object pattern which seems to work for Java, but I'm not exactly sure how to apply it to C++. Are there other solutions?
|
C++ supports multiple inheritance so you can have a generic persistence API and inherit a persistence mechanism. This would still have to use introspection to get out the class metadata, but you would still have this issue with any persistence layer.
Alternatively you could do something similar but use the metadata to drive a code generator that fills in the 'Getters' and 'Setters' for the persistence layer.
Any persistence layer will typically use one or the other approach, so your problem is hooking the loading mechanism into the persistence layer. I think this makes your problem little different from a single persistence layer but tackling it from the other direction. Rather than building domain classes onto a persistence framework you are providing a set of domain classes with the hooks for a persistence framework that third parties can plug their data access mechanism into.
I think that once you provide access to class metadata and callbacks the perisistence mechanism is relatively straightforward. Look at the metadata components of any convenient C++ O/R mapping framework and understand how they work. Encapsulate this with an API in one of the base classes of your domain classes and provide a generic getter/setter API for instantiation or persisting. The rest is up to the person implementing the persistence layer.
Edit: I can't think of a C++ library with the type of pluggable persistence mechanism you're describing, but I did something in Python that could have had this type of facility added. The particular implementation used facilities in Python with no direct C++ equivalent, although the basic principle could probably be adapted to work with C++.
In Python, you can intercept accesses to instance variables by overriding __getattr__() and __setattr__(). The persistence mechanism actually maintained its own data cache behind the scenes. When the functionality was mixed into the class (done through multiple inheritance), it overrode the default system behaviour for member accessing and checked whether the attribute being queried matched anything in its dictionary. Where this happened, the call was redirected to get or set an item in the data cache.
The cache had metadata of its own. It was aware of relationships between entities within its data model, and knew which attribute names to intercept to access data. The way this worked separated it from the database access layer and could (at least in theory) have allowed the persistence mechanism to be used with different drivers. There is no inherent reason that you couldn't have (for example) built a driver that serialised it out to an XML file.
Making something like this work in C++ would be a bit more fiddly, and it may not be possible to make the object cache access as transparent as it was with this system. You would probably be best with an explicit protocol that loads and flushes the object's state to the cache. The code to this would be quite amenable to generation from the cache metadata, but this would have to be done at compile time. You may be able to do something with templates or by overriding the -> operator to make the access protocol more transparent, but this is probably more trouble than it's worth.
|
113,830 | 113,843 |
Performance penalty for working with interfaces in C++?
|
Is there a runtime performance penalty when using interfaces (abstract base classes) in C++?
|
Short Answer: No.
Long Answer:
It is not the base class or the number of ancestors a class has in its hierarchy that affects it speed. The only thing is the cost of a method call.
A non virtual method call has a cost (but can be inlined)
A virtual method call has a slightly higher cost as you need to look up the method to call before you call it (but this is a simple table look up not a search). Since all methods on an interface are virtual by definition there is this cost.
Unless you are writing some hyper speed sensitive application this should not be a problem. The extra clarity that you will receive from using an interface usually makes up for any perceived speed decrease.
|
113,992 | 113,995 |
C++ Binary operators order of precedence
|
In what order are the following parameters tested (in C++)?
if (a || b && c)
{
}
I've just seen this code in our application and I hate it, I want to add some brackets to just clarify the ordering. But I don't want to add the brackets until I know I'm adding them in the right place.
Edit: Accepted Answer & Follow Up
This link has more information, but it's not totally clear what it means. It seems || and && are the same precedence, and in that case, they are evaluated left-to-right.
http://msdn.microsoft.com/en-us/library/126fe14k.aspx
|
From here:
a || (b && c)
This is the default precedence: && has higher precedence than ||, so b && c is grouped first.
|
114,085 | 114,102 |
Fast String Hashing Algorithm with low collision rates with 32 bit integer
|
I have lots of unrelated named things that I'd like to do quick searches against. An "aardvark" is always an "aardvark" everywhere, so hashing the string and reusing the integer would work well to speed up comparisons. The entire set of names is unknown (and changes over time). What is a fast string hashing algorithm that will generate small (32 or 16) bit values and have a low collision rate?
I'd like to see an optimized implementation specific to C/C++.
|
One of the FNV variants should meet your requirements. They're fast, and produce fairly evenly distributed outputs.
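For example, a minimal sketch of 32-bit FNV-1a (the constants are the standard published offset basis and prime):
#include <stdint.h>
#include <string>

uint32_t fnv1a(const std::string& s)
{
    uint32_t hash = 2166136261u;  // FNV offset basis
    for (size_t i = 0; i < s.size(); ++i)
    {
        hash ^= static_cast<unsigned char>(s[i]);
        hash *= 16777619u;        // FNV prime
    }
    return hash;
}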
|
114,180 | 114,189 |
Pointer vs. Reference
|
What would be better practice when giving a function the original variable to work with:
unsigned long x = 4;
void func1(unsigned long& val) {
val = 5;
}
func1(x);
or:
void func2(unsigned long* val) {
*val = 5;
}
func2(&x);
IOW: Is there any reason to pick one over another?
|
My rule of thumb is:
Use pointers if you want to do pointer arithmetic with them (e.g. incrementing the pointer address to step through an array) or if you ever have to pass a NULL-pointer.
Use references otherwise.
|
114,211 | 114,221 |
Boost shared_ptr container question
|
Let's say I have a container (std::vector) of pointers used by a multi-threaded application. When adding new pointers to the container, the code is protected using a critical section (boost::mutex). All well and good. The code should be able to return one of these pointers to a thread for processing, but another separate thread could choose to delete one of these pointers, which might still be in use. e.g.:
thread1()
{
foo* p = get_pointer();
...
p->do_something();
}
thread2()
{
foo* p = get_pointer();
...
delete p;
}
So thread2 could delete the pointer whilst thread1 is using it. Nasty.
So instead I want to use a container of Boost shared ptrs. IIRC these pointers will be reference counted, so as long as I return shared ptrs instead of raw pointers, removing one from the container WON'T actually free it until the last use of it goes out of scope. i.e.
std::vector<boost::shared_ptr<foo> > my_vec;
thread1()
{
boost::shared_ptr<foo> sp = get_ptr(0);
...
sp->do_something();
}
thread2()
{
boost::shared_ptr<foo> sp = get_ptr(0);
...
my_vec.erase(my_vec.begin());
}
boost::shared_ptr<foo> get_ptr(int index)
{
lock_my_vec();
return my_vec[index];
}
In the above example, if thread1 gets the pointer before thread2 calls erase, will the object pointed to still be valid? It won't actually be deleted when thread1 completes? Note that access to the global vector will be via a critical section.
I think this is how shared_ptrs work but I need to be sure.
|
For the thread safety of boost::shared_ptr you should check this link. It's not guaranteed to be safe, but on many platforms it works. Modifying the std::vector is not safe AFAIK.
|
114,238 | 114,264 |
Difference between managed C++ and C++
|
The second question is: when should I use which of these two?
|
When not specified, C++ is unmanaged C++, compiled to machine code. In unmanaged C++ you must manage memory allocation manually.
Managed C++ is a language invented by Microsoft, that compiles to bytecode run by the .NET Framework. It uses mostly the same syntax as C++ (hence the name) but is compiled in the same way as C# or VB.NET; basically only the syntax changes, e.g. using '->' to point to a member of an object (instead of '.' in C#), using '::' for namespaces, etc.
Managed C++ was made to ease transition from classic C++ to the .NET Framework. It is not intended to be used to start new projects (C# is preferred).
|
114,819 | 114,883 |
Getting a vector<Derived*> into a function that expects a vector<Base*>
|
Consider these classes.
class Base
{
...
};
class Derived : public Base
{
...
};
this function
void BaseFoo( std::vector<Base*>vec )
{
...
}
And finally my vector
std::vector<Derived*>derived;
I want to pass derived to function BaseFoo, but the compiler doesn't let me. How do I solve this, without copying the whole vector to a std::vector<Base*>?
|
vector<Base*> and vector<Derived*> are unrelated types, so you can't do this. This is explained in the C++ FAQ here.
You need to change your variable from a vector<Derived*> to a vector<Base*> and insert Derived objects into it.
Also, to avoid copying the vector unnecessarily, you should pass it by const-reference, not by value:
void BaseFoo( const std::vector<Base*>& vec )
{
...
}
Finally, to avoid memory leaks, and make your code exception-safe, consider using a container designed to handle heap-allocated objects, e.g:
#include <boost/ptr_container/ptr_vector.hpp>
boost::ptr_vector<Base> vec;
Alternatively, change the vector to hold a smart pointer instead of using raw pointers:
#include <memory>
std::vector< std::shared_ptr<Base> > vec;
or
#include <boost/shared_ptr.hpp>
std::vector< boost::shared_ptr<Base> > vec;
In each case, you would need to modify your BaseFoo function accordingly.
|
114,874 | 114,903 |
How to determine the value of socket listen() backlog parameter?
|
How should I determine what to use for a listening socket's backlog parameter? Is it a problem to simply specify a very large number?
|
From the docs:
A value for the backlog of SOMAXCONN is a special constant that instructs the underlying service provider responsible for socket s to set the length of the queue of pending connections to a maximum reasonable value.
|
115,115 | 115,157 |
Test Automation with Embedded Hardware
|
Has anyone had success automating testing directly on embedded hardware?
Specifically, I am thinking of automating a battery of unit tests for hardware layer modules. We need to have greater confidence in our hardware layer code. A lot of our projects use interrupt driven timers, ADCs, serial io, serial SPI devices (flash memory) etc..
Is this even worth the effort?
We typically target:
Processor: 8 or 16 bit microcontrollers (some DSP stuff)
Language: C (sometimes c++).
|
Sure. In the automotive industry we use $100,000 custom built testers for each new product to verify the hardware and software are operating correctly.
The developers, however, also build a cheaper (sub $1,000) tester that includes a bunch of USB I/O, A/D, PWM in/out, etc and either use scripting on the workstation, or purpose built HIL/SIL test software such as MxVDev.
Hardware in the Loop (HIL) testing is probably what you mean, and it simply involves some USB hardware I/O connected to the I/O of your device, with software on the computer running tests against it.
Whether it's worth it depends.
In the high reliability industry (airplane, automotive, etc) the customer specifies very extensive hardware testing, so you have to have it just to get the bid.
In the consumer industry, with non complex projects it's usually not worth it.
With any project where there's more than a few programmers involved, though, it's really nice to have a nightly regression test run on the hardware - it's hard to correctly simulate the hardware to the degree needed to satisfy yourself that the software testing is enough.
The testing then shows immediately when a problem has entered the build.
Generally you perform both black box and white box testing - you have diagnostic code running on the device that allows you to spy on signals and memory in the hardware (which might just be a debugger, or might be code you wrote that reacts to messages on a bus, for instance). This would be white box testing where you can see what's happening internally (and even cause some things to happen, such as critical memory errors which can't be tested without introducing the error yourself).
We also run a bunch of 'black box' tests where the diagnostic path is ignored and only the I/O is stimulated/read.
For a much cheaper setup, you can get $100 microcontroller boards with USB and/or ethernet (such as the Atmel UC3 family) which you can connect to your device and run basic testing.
It's especially useful for product maintenance - when the project is done, store a few working boards, the tester, and a complete set of software on CD. When you need to make a modification or debug a problem, it's easy to set it all back up and work on it with some knowledge (after testing) that the major functionality was not affected by your changes.
-Adam
|
115,703 | 115,735 |
Storing C++ template function definitions in a .CPP file
|
I have some template code that I would prefer to have stored in a CPP file instead of inline in the header. I know this can be done as long as you know which template types will be used. For example:
.h file
class foo
{
public:
template <typename T>
void doIt(const T& t);
};
.cpp file
template <typename T>
void foo::doIt(const T& t)
{
// Do something with t
}
template void foo::doIt<int>(const int&);
template void foo::doIt<std::string>(const std::string&);
Note the last two lines - the foo::doIt template function is only used with ints and std::strings, so those definitions mean the app will link.
My question is - is this a nasty hack or will this work with other compilers/linkers? I am only using this code with VS2008 at the moment but will be wanting to port to other environments.
|
The problem you describe can be solved by defining the template in the header, or via the approach you describe above.
I recommend reading the following points from the C++ FAQ Lite:
Why can’t I separate the definition of my templates class from its declaration and put it inside a .cpp file?
How can I avoid linker errors with my template functions?
How does the C++ keyword export help with template linker errors?
They go into a lot of detail about these (and other) template issues.
|
116,002 | 116,049 |
Pointers and containers
|
We all know that RAW pointers need to be wrapped in some form of smart pointer to get Exception safe memory management. But when it comes to containers of pointers the issue becomes more thorny.
The std containers insist on the contained object being copyable so this rules out the use of std::auto_ptr, though you can still use boost::shared_ptr etc.
But there are also some boost containers designed explicitly to hold pointers safely:
See Pointer Container Library
The question is:
Under what conditions should I prefer to use the ptr_containers over a container of smart_pointers?
boost::ptr_vector<X>
or
std::vector<boost::shared_ptr<X> >
|
Boost pointer containers have strict ownership over the resources they hold. A std::vector<boost::shared_ptr<X>> has shared ownership. There are reasons why that may be necessary, but in case it isn't, I would default to boost::ptr_vector<X>. YMMV.
|
116,469 | 116,510 |
Cleaning a string of punctuation in C++
|
Ok so before I even ask my question I want to make one thing clear. I am currently a student at NIU for Computer Science and this does relate to one of my assignments for a class there. So if anyone has a problem read no further and just go on about your business.
Now for anyone who is willing to help heres the situation. For my current assignment we have to read a file that is just a block of text. For each word in the file we are to clear any punctuation in the word (ex : "can't" would end up as "can" and "that--to" would end up as "that" obviously with out the quotes, quotes were used just to specify what the example was).
The problem I've run into is that I can clean the string fine and then insert it into the map that we are using but for some reason with the code I have written it is allowing an empty string to be inserted into the map. Now I've tried everything that I can come up with to stop this from happening and the only thing I've come up with is to use the erase method within the map structure itself.
So what I am looking for is two things, any suggestions about how I could a) fix this with out simply just erasing it and b) any improvements that I could make on the code I already have written.
Here are the functions I have written to read in from the file and then the one that cleans it.
Note: the function that reads in from the file calls the clean_entry function to get rid of punctuation before anything is inserted into the map.
Edit: Thank you Chris. Numbers are allowed :). If anyone has any improvements to the code I've written or any criticisms of something I did I'll listen. At school we really don't get feed back on the correct, proper, or most efficient way to do things.
int get_words(map<string, int>& mapz)
{
int cnt = 0; //set out counter to zero
map<string, int>::const_iterator mapzIter;
ifstream input; //declare instream
input.open( "prog2.d" ); //open instream
assert( input ); //assure it is open
string s; //temp strings to read into
string not_s;
input >> s;
while(!input.eof()) //read in until EOF
{
not_s = "";
clean_entry(s, not_s);
if((int)not_s.length() == 0)
{
input >> s;
clean_entry(s, not_s);
}
mapz[not_s]++; //increment occurence
input >>s;
}
input.close(); //close instream
for(mapzIter = mapz.begin(); mapzIter != mapz.end(); mapzIter++)
cnt = cnt + mapzIter->second;
return cnt; //return number of words in instream
}
void clean_entry(const string& non_clean, string& clean)
{
int i, j, begin, end;
for(i = 0; isalnum(non_clean[i]) == 0 && non_clean[i] != '\0'; i++);
begin = i;
if(begin ==(int)non_clean.length())
return;
for(j = begin; isalnum(non_clean[j]) != 0 && non_clean[j] != '\0'; j++);
end = j;
clean = non_clean.substr(begin, (end-begin));
for(i = 0; i < (int)clean.size(); i++)
clean[i] = tolower(clean[i]);
}
|
The problem with empty entries is in your while loop. If you get an empty string, you clean the next one, and add it without checking. Try changing:
not_s = "";
clean_entry(s, not_s);
if((int)not_s.length() == 0)
{
input >> s;
clean_entry(s, not_s);
}
mapz[not_s]++; //increment occurence
input >>s;
to
not_s = "";
clean_entry(s, not_s);
if((int)not_s.length() > 0)
{
mapz[not_s]++; //increment occurence
}
input >>s;
EDIT: I notice you are checking if the characters are alphanumeric. If numbers are not allowed, you may need to revisit that area as well.
|
116,485 | 116,636 |
Fastest way to see how many bytes are equal between fixed length arrays
|
I have 2 arrays of 16 elements (chars) that I need to "compare" and see how many elements are equal between the two.
This routine is going to be used millions of times (a usual run is about 60 or 70 million times), so I need it to be as fast as possible. I'm working on C++ (C++Builder 2007, for the record)
Right now, I have a simple:
matches += array1[0] == array2[0];
repeated 16 times (as profiling it appears to be 30% faster than doing it with a for loop)
Is there any other way that could work faster?
Some data about the environment and the data itself:
I'm using C++Builder, which doesn't have any speed optimizations to take into account. I will try eventually with another compiler, but right now I'm stuck with this one.
The data will be different most of the times. 100% equal data is usually very very rare (maybe less than 1%)
|
UPDATE: This answer has been modified to make my comments match the source code provided below.
There is an optimization available if you have the capability to use SSE2 and popcnt instructions.
16 bytes happens to fit nicely in an SSE register. Using c++ and assembly/intrinsics, load the two 16 byte arrays into xmm registers, and cmp them. This generates a bitmask representing the true/false condition of the compare. You then use a movmsk instruction to load a bit representation of the bitmask into an x86 register; this then becomes a bit field where you can count all the 1's to determine how many true values you had. A hardware popcnt instruction can be a fast way to count all the 1's in a register.
This requires knowledge of assembly/intrinsics and SSE in particular. You should be able to find web resources for both.
If you run this code on a machine that does not support either SSE2 or popcnt, you must then iterate through the arrays and count the differences with your unrolled loop approach.
Good luck
Edit:
Since you indicated you did not know assembly, here's some sample code to illustrate my answer:
#include "stdafx.h"
#include <iostream>
#include "intrin.h"
inline unsigned cmpArray16( char (&arr1)[16], char (&arr2)[16] )
{
__m128i first = _mm_loadu_si128( reinterpret_cast<__m128i*>( &arr1 ) );
__m128i second = _mm_loadu_si128( reinterpret_cast<__m128i*>( &arr2 ) );
return _mm_movemask_epi8( _mm_cmpeq_epi8( first, second ) );
}
int _tmain( int argc, _TCHAR* argv[] )
{
unsigned count = 0;
char arr1[16] = { 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0 };
char arr2[16] = { 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0 };
count = __popcnt( cmpArray16( arr1, arr2 ) );
std::cout << "The number of equivalent bytes = " << count << std::endl;
return 0;
}
Some notes: This function uses SSE2 instructions and a popcnt instruction introduced in the Phenom processor (that's the machine that I use). I believe the most recent Intel processors with SSE4 also have popcnt. This function does not check for instruction support with CPUID; the function is undefined if used on a processor that does not have SSE2 or popcnt (you will probably get an invalid opcode instruction). That detection code is a separate thread.
I have not timed this code; the reason I think it's faster is because it compares 16 bytes at a time, branchless. You should modify this to fit your environment, and time it yourself to see if it works for you. I wrote and tested this on VS2008 SP1.
SSE prefers data that is aligned on a natural 16-byte boundary; if you can guarantee that then you should get additional speed improvements, and you can change the _mm_loadu_si128 instructions to _mm_load_si128, which requires alignment.
|
116,646 | 116,714 |
data access object pattern implementation
|
I would like to implement a data access object pattern in C++, but preferably without using multiple inheritance and/or boost (which my client does not like).
Do you have any suggestions?
|
OTL (otl.sourceforge.net) is an excellent C++ database library. It's a single include file so doesn't have all the complexity associated (rightly or wrongly!) with Boost.
In terms of the DAO itself, you have many options. The simplest that hides the database implementation is just to use C++ style interfaces and implement the data access layer in a particular implementation.
class MyDAO {
// Pure virtual functions to access the data itself
}
class MyDAOImpl : public MyDAO {
// Implementations to get the data from the database
}
|
116,687 | 116,741 |
Problem Linking "static" Methods in C++
|
I want to call a few "static" methods of a CPP class defined in a different file but I'm having linking problems. I created a test-case that recreates my problem and the code for it is below.
(I'm completely new to C++, I come from a Java background and I'm a little familiar with C.)
// CppClass.cpp
#include <iostream>
#include <pthread.h>
static pthread_t thread;
static pthread_mutex_t mutex;
static pthread_cond_t cond;
static int shutdown;
using namespace std;
class CppClass
{
public:
static void Start()
{
cout << "Testing start function." << endl;
shutdown = 0;
pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
pthread_mutex_init(&mutex, NULL);
pthread_cond_init(&cond, NULL);
pthread_create(&thread, &attr, run_thread, NULL);
}
static void Stop()
{
pthread_mutex_lock(&mutex);
shutdown = 1;
pthread_cond_broadcast(&cond);
pthread_mutex_unlock(&mutex);
}
static void Join()
{
pthread_join(thread, NULL);
}
private:
static void *run_thread(void *pthread_args)
{
CppClass *obj = new CppClass();
pthread_mutex_lock(&mutex);
while (shutdown == 0)
{
struct timespec ts;
ts.tv_sec = time(NULL) + 3;
pthread_cond_timedwait(&cond, &mutex, &ts);
if (shutdown)
{
break;
}
obj->display();
}
pthread_mutex_unlock(&mutex);
pthread_mutex_destroy(&mutex);
pthread_cond_destroy(&cond);
pthread_exit(NULL);
return NULL;
}
void display()
{
cout << " Inside display() " << endl;
}
};
// main.cpp
#include <iostream>
/*
* If I remove the comment below and delete the
* the class declaration part, it works.
*/
// #include "CppClass.cpp"
using namespace std;
class CppClass
{
public:
static void Start();
static void Stop();
static void Join();
};
int main()
{
CppClass::Start();
while (1)
{
int quit;
cout << "Do you want to end?: (0 = stay, 1 = quit) ";
cin >> quit;
cout << "Input: " << quit << endl;
if (quit)
{
CppClass::Stop();
cout << "Joining CppClass..." << endl;
CppClass::Join();
break;
}
}
}
When I tried to compile, I get the following error:
$ g++ -o go main.cpp CppClass.cpp -l pthread
/tmp/cclhBttM.o(.text+0x119): In function `main':
: undefined reference to `CppClass::Start()'
/tmp/cclhBttM.o(.text+0x182): In function `main':
: undefined reference to `CppClass::Stop()'
/tmp/cclhBttM.o(.text+0x1ad): In function `main':
: undefined reference to `CppClass::Join()'
collect2: ld returned 1 exit status
But if I remove the class declaration in main.cpp and replace it with #include "CppClass.cpp", it works fine. Basically, I want to put these declarations in a separate .h file and use it. Am I missing something?
Thanks for the help.
|
It's obvious you come from a Java background because you haven't yet grasped the concept of header files. In Java the process of defining something is usually in one piece. You declare and define at the same time. In C/C++ it's a two-step process. Declaring something tells the compiler "something exists with this type, but I'll tell you later how it is actually implemented". Defining something is giving the compiler the actual implementation part. Header files are used mostly for declarations, .cpp files for definitions.
Header files are there to describe the "API" of classes, but not their actual code. It is possible to include code in the header; that's called header-inlining. You have inlined everything in CppClass.cpp (not good, header-inlining should be the exception), and then you declare your class in main.cpp AGAIN, which is a double declaration in C++. Inlining in the class body leads to code duplication every time you use a method (this only sounds insane; see the C++ FAQ section on inlining for details).
Including the double declaration in your code gives you a compiler error. Leaving the class code out compiles, but gives you a linker error, because now you only have the header-like class declaration in main.cpp. The linker sees no code that implements your class methods, which is why the errors appear. Unlike Java, the C++ linker will NOT automatically search for object files it wants to use. If you use class XYZ and don't give it object code for XYZ, it will simply fail.
Please have a look at Wikipedia's header file article and Header File Include Patterns (the link is also at the bottom of the Wikipedia article and contains more examples)
In short:
For each class, generate a NewClass.h and NewClass.cpp file.
In the NewClass.h file, write:
class NewClass {
public:
NewClass();
int methodA();
int methodB();
}; <- don't forget the semicolon
In the NewClass.cpp file, write:
#include "NewClass.h"
NewClass::NewClass() {
// constructor goes here
}
int NewClass::methodA() {
// methodA goes here
return 0;
}
int NewClass::methodB() {
// methodB goes here
return 1;
}
In main.cpp, write:
#include "NewClass.h"
int main() {
NewClass nc;
// do something with nc
}
To link it all together, do a
g++ -o NewClassExe NewClass.cpp main.cpp
(just an example with gcc)
|
117,110 | 117,145 |
When have we any practical use for hierarchical namespaces in c++?
|
I can understand the use for one level of namespaces. But 3 levels of namespaces. Looks insane. Is there any practical use for that? Or is it just a misconception?
|
Hierarchical namespaces do have a use in that they allow progressively more refined definitions. Certainly a single provider may produce two classes with the same name. Often the first level is occupied by the company name, the second specifies the product, and the third (and possibly more) may provide the domain.
There are also other uses of namespace segregation. One popular situation is placing the base classes for a factory pattern in their own namespace and then derived factories in their own namespaces by provider, e.g. System.Data, System.Data.SqlClient and System.Data.OleDbClient.
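For example, a minimal sketch (the names are made up for illustration):
namespace AcmeCorp
{
    namespace Persistence
    {
        namespace Oracle { class Connection { /* ... */ }; }
        namespace Sqlite { class Connection { /* ... */ }; }
    }
}
// Both classes coexist; callers pick the provider explicitly:
AcmeCorp::Persistence::Oracle::Connection conn;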
|
117,293 | 117,308 |
Use of 'const' for function parameters
|
How far do you go with const? Do you just make functions const when necessary or do you go the whole hog and use it everywhere? For example, imagine a simple mutator that takes a single boolean parameter:
void SetValue(const bool b) { my_val_ = b; }
Is that const actually useful? Personally I opt to use it extensively, including parameters, but in this case I wonder if it's worthwhile?
I was also surprised to learn that you can omit const from parameters in a function declaration but can include it in the function definition, e.g.:
.h file
void func(int n, long l);
.cpp file
void func(const int n, const long l)
Is there a reason for this? It seems a little unusual to me.
|
The reason is that const for the parameter only applies locally within the function, since it is working on a copy of the data. This means the function signature is really the same anyway. It's probably bad style to do this a lot, though.
I personally tend not to use const except for reference and pointer parameters. For copied objects it doesn't really matter, although it can be safer as it signals intent within the function. It's really a judgement call. I do tend to use const_iterator, though, when looping on something I don't intend to modify, so I guess to each his own, as long as const correctness for reference types is rigorously maintained.
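A small sketch of why the two spellings declare the same function:
// Declaration: top-level const on a by-value parameter is ignored
// when the compiler matches signatures.
void func(int n);
// Definition: the const only forbids modifying the local copy.
void func(const int n)
{
    // n = 42; // error: n is read-only inside this body
}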
|
117,693 | 118,095 |
How to initialize Pango under Win32?
|
Having downloaded Pango and GLib from the GTK+ Project's Win32 downloads page and having created and configured a Win32 project under Visual Studio 2005 so it points to the proper lib and include directories, how do you initialize Pango for rendering to a Win32 window?
Should the first call be to pango_win32_get_context()? Calling that function causes the application to hang on that call, as the function never returns.
What should be the first call? What other calls are needed to initialize Pango for Win32 and render a simple text string? Are there any examples available online for rendering with Pango under Win32?
|
Pango is a GObject based library. As such, you need to make sure that the glib dynamic type system is initialized before using any of its functionality. This can be done by calling g_type_init() (either directly or indirectly via something like gtk_init()). Could this be your problem?
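For example, a minimal start-up sketch (untested; assumes you link against gobject-2.0, pango-1.0 and pangowin32-1.0):
#include <glib-object.h>
#include <pango/pangowin32.h>
void init_pango(void)
{
    g_type_init(); /* bring up the GObject type system before any Pango call */
    PangoContext *context = pango_win32_get_context();
    /* ... create a PangoLayout from the context and render from there ... */
}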
|
117,708 | 117,760 |
nonvirtual interface idiom for more than two levels of inheritance?
|
The non-virtual interface idiom describes how the virtual methods are nonpublic customisation points, and public methods are nonvirtual to allow the base class to control at all times how the customisation points are called.
This is an elegant idiom and I like to use it, but how does it work if the derived class is itself a base class?
|
It works, because the derived class can override a private virtual function of a base class, even if the base class function overrides its base class function.
This is perfectly legal:
class Parent
{
public:
int foo() {return bar();} // the non-virtual public interface
private:
virtual int bar();
};
class Child : public Parent
{
private:
virtual int bar(); // overrides Parent::bar()
};
class Grandchild : public Child
{
private:
virtual int bar(); // overrides Child::bar();
};
|
117,755 | 118,442 |
Getting a char* from a _variant_t in optimal time
|
Here's the code I want to speed up. It's getting a value from an ADO recordset and converting it to a char*. But this is slow. Can I skip the creation of the _bstr_t?
_variant_t var = pRs->Fields->GetItem(i)->GetValue();
if (V_VT(&var) == VT_BSTR)
{
char* p = (const char*) (_bstr_t) var;
|
A BSTR carries its length in the 4 bytes immediately before the character data, so you can get the length without scanning for a terminator (SysStringLen() reads it for you). You can then loop through and take every other byte if the data is Unicode, or every byte if it is multibyte. Some sort of memcpy or other method would work too. IIRC, this can be faster than W2A or casting via (LPCSTR)(_bstr_t)
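A rough sketch of the manual loop (assumes the text is ASCII-range; anything beyond that needs WideCharToMultiByte):
if (V_VT(&var) == VT_BSTR)
{
    BSTR bstr = V_BSTR(&var);
    UINT len = SysStringLen(bstr);         // reads the length prefix, no scanning
    char* p = new char[len + 1];
    for (UINT j = 0; j < len; ++j)
        p[j] = static_cast<char>(bstr[j]); // low byte of each wide character
    p[len] = '\0';
    // ... use p, then delete [] p ...
}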
|
117,844 | 117,870 |
Converting string of 1s and 0s into binary value
|
I'm trying to convert an incoming string of 1s and 0s from stdin into their respective binary values (where a string such as "11110111" would be converted to 0xF7). This seems pretty trivial but I don't want to reinvent the wheel, so I'm wondering if there's anything in the C/C++ standard libs that can already perform such an operation?
|
#include <stdio.h>
#include <stdlib.h>
int main(void) {
char * ptr;
long parsed = strtol("11110111", & ptr, 2);
printf("%lX\n", parsed);
return EXIT_SUCCESS;
}
For larger numbers, there is a long long version, strtoll.
|
118,199 | 118,311 |
C++ Thread, shared data
|
I have an application where 2 threads are running... Is there any certainty that when I change a global variable from one thread, the other will notice this change?
I don't have any synchronization or mutual exclusion system in place... but should this code work all the time (imagine a global bool named dataUpdated):
Thread 1:
while(1) {
if (dataUpdated)
updateScreen();
doSomethingElse();
}
Thread 2:
while(1) {
if (doSomething())
dataUpdated = TRUE;
}
Does a compiler like gcc optimize this code in a way that it doesn't check the global value, only considering its value at compile time (because it never gets changed in the same thread)?
PS: Since this is for a game-like application, it really doesn't matter if there is a read while the value is being written... all that matters is that the change gets noticed by the other thread.
|
Yes. No. Maybe.
First, as others have mentioned, you need to make dataUpdated volatile; otherwise the compiler may be free to lift reading it out of the loop (depending on whether or not it can see that doSomethingElse doesn't touch it).
Secondly, depending on your processor and ordering needs, you may need memory barriers. volatile is enough to guarantee that the other processor will see the change eventually, but not enough to guarantee that the changes will be seen in the order they were performed. Your example only has one flag, so it doesn't really show this phenomenon. If you need and use memory barriers, you should no longer need volatile.
Volatile considered harmful and Linux Kernel Memory Barriers are good background on the underlying issues; I don't really know of anything similar written specifically for threading. Thankfully threads don't raise these concerns nearly as often as hardware peripherals do, though the sort of case you describe (a flag indicating completion, with other data presumed to be valid if the flag is set) is exactly the sort of thing where ordering matters...
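To illustrate the first point, here is a minimal sketch reusing the names from the question (the only change is the volatile qualifier on the flag):
volatile int dataUpdated = 0; // forces a fresh load from memory on every check
// Thread 1: the compiler can no longer assume dataUpdated is loop-invariant
while (1) {
    if (dataUpdated)
        updateScreen();
    doSomethingElse();
}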
|
118,547 | 118,606 |
Creating a ZIP file on Windows (XP/2003) in C/C++
|
I am looking for a way to create a ZIP file from a folder using the Windows C/C++ APIs. I can find the way to do this in VBScript using the Shell32.Application CopyHere method, and I found a tutorial explaining how to do it in C# also, but nothing for the C API (C++ is fine too, project already uses MFC).
I'd be really grateful if anyone can share some sample C code that can successfully create a zip file on Windows XP/2003. Failing that, if someone can find solid docs or a tutorial that would be great, since MSDN searches don't turn up much. I'm really hoping to avoid having to ship a third-party lib for this, because the functionality is obviously there, I just can't figure out how to access it. Google searches turn up nothing useful, just tantalizing bits and pieces of information. Here's hoping someone in the community has sorted this out and can share it for posterity!
|
EDIT: This answer is old, but I cannot delete it because it was accepted. See the next one
https://stackoverflow.com/a/121720/3937
----- ORIGINAL ANSWER -----
There is sample code to do that here
[EDIT: Link is now broken]
http://www.eggheadcafe.com/software/aspnet/31056644/using-shfileoperation-to.aspx
Make sure you read about how to handle monitoring for the thread to complete.
Edit: From the comments, this code only works on existing zip file, but @Simon provided this code to create a blank zip file
FILE* f = fopen("path", "wb");
// An empty ZIP is just the 22-byte End Of Central Directory record:
// the "PK\5\6" signature followed by 18 zero bytes.
fwrite("\x50\x4B\x05\x06\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 22, 1, f);
fclose(f);
|
118,630 | 118,664 |
What is the best signature for overloaded arithmetic operators in C++?
|
I had assumed that the canonical form for operator+, assuming the existence of an overloaded operator+= member function, was like this:
const T operator+(const T& lhs, const T& rhs)
{
return T(lhs) += rhs;
}
But it was pointed out to me that this would also work:
const T operator+ (T lhs, const T& rhs)
{
return lhs += rhs;
}
In essence, this form transfers creation of the temporary from the body of the implementation to the function call.
It seems a little awkward to have different types for the two parameters, but is there anything wrong with the second form? Is there a reason to prefer one over the other?
|
With the edited question, the first form would be preferred. The compiler will more likely optimize the return value (you could verify this by placing a breakpoint in the constructor for T). The first form also takes both parameters as const, which would be more desirable.
Research the topic of return value optimization; this link is a quick example: http://www.cs.cmu.edu/~gilpin/c++/performance.html
|
118,659 | 175,086 |
How do I use Qt and SDL together?
|
I am building a physics simulation engine and editor in Windows. I want to build the editor part using Qt and I want to run the engine using SDL with OpenGL.
My first idea was to build the editor using only Qt and share as much code with the engine (the resource manager, the renderer, the maths). But, I would also like to be able to run the simulation inside the editor. This means I also have to share the simulation code which uses SDL threads.
So, my question is this: Is there a way to render OpenGL to a Qt window by using SDL?
I have read on the web that it might be possible to supply SDL with a window handle in which to render. Does anybody have experience doing that?
Also, the threaded part of the simulator might pose a problem since it uses SDL threads.
|
While you might get it to work the way the first answer suggests, you will likely run into problems due to threading. There are no simple solutions when it comes to threading, and here you would have the SDL, Qt, and OpenGL main loops interacting. Not fun.
The easiest and sanest solution would be to decouple both parts, so that SDL and Qt run in separate processes, and have them use some kind of messaging to communicate (I'd recommend D-Bus here). You can have SDL render into a borderless window while your editor sends commands via messages.
|
118,727 | 118,734 |
Compile errors in mshtml.h compiling with VS2008
|
I'm in the process of moving one of our projects from VS6 to VS2008 and I've hit the following compile error with mshtml.h:
1>c:\program files\microsoft sdks\windows\v6.0a\include\mshtml.h(5272) : error C2143: syntax error : missing '}' before 'constant'
1>c:\program files\microsoft sdks\windows\v6.0a\include\mshtml.h(5275) : error C2143: syntax error : missing ';' before '}'
1>c:\program files\microsoft sdks\windows\v6.0a\include\mshtml.h(5275) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>c:\program files\microsoft sdks\windows\v6.0a\include\mshtml.h(28523) : error C2059: syntax error : '}'
1>c:\program files\microsoft sdks\windows\v6.0a\include\mshtml.h(28523) : error C2143: syntax error : missing ';' before '}'
1>c:\program files\microsoft sdks\windows\v6.0a\include\mshtml.h(28523) : error C2059: syntax error : '}'
Following the first error statement drops into this part of the mshtml.h code, pointing at the "True = 1" line:
EXTERN_C const GUID CLSID_CDocument;
EXTERN_C const GUID CLSID_CScriptlet;
typedef
enum _BoolValue
{ True = 1,
False = 0,
BoolValue_Max = 2147483647L
} BoolValue;
EXTERN_C const GUID CLSID_CPluginSite;
It looks like someone on expert-sexchange also came across this error but I'd rather not dignify that site with a "7 day free trial".
Any suggestions would be most welcome.
|
There is probably a #define changing something. Try running just the preprocessor on your .cpp and generating a .i file. The setting is in the project property pages.
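For example, from a Visual Studio command prompt (substitute your own source file):
rem /P runs the preprocessor only and writes main.i; /C keeps comments in the output
cl /P /C main.cpp
Inspecting main.i around the failing enum shows what True was expanded to.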
EDIT: Also, you can get the answer from that other expert site by scrolling to the bottom of the page. They have to do that or Google will take them out of their indexes.
|
118,774 | 119,553 |
Is there a clean way to prevent windows.h from creating a near & far macro?
|
Deep down in WinDef.h there's this relic from the segmented memory era:
#define far
#define near
This obviously causes problems if you attempt to use near or far as variable names. Any clean workarounds? Other than renaming my variables?
|
You can safely undefine them, contrary to claims from others. The reason is that they're just macros. They only affect the preprocessor between their definition and their undefinition. In your case, that will be from early in windows.h to the last line of windows.h. If you need extra Windows headers, you'd include them after windows.h and before the #undef. In your code, the preprocessor will simply leave the symbols unchanged, as intended.
The comment about older code is irrelevant. That code will be in a separate library, compiled independently. Only at link time will these be connected, when macros are long gone.
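A minimal sketch of that arrangement:
#include <windows.h>
// ...any other Windows headers go here, before the #undef...
#undef near
#undef far
int near = 1; // fine now: 'near' and 'far' are ordinary identifiers again
int far = 2;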
|
118,945 | 118,968 |
Best C/C++ Network Library
|
I haven't done work in C/C++ for a little bit and was just wondering what people's favorite cross platform libraries are to use.
I'm looking for something that is a good quick and dirty library as well as a library that is a little more robust. Often those are two different libraries and that's okay.
|
Aggregated List of Libraries
Boost.Asio is really good.
Asio is also available as a stand-alone library.
ACE is also good, a bit more mature and has a couple of books to support it.
C++ Network Library
POCO
Qt
Raknet
ZeroMQ (C++)
nanomsg (C Library)
nng (C Library)
Berkeley Sockets
libevent
Apache APR
yield
Winsock2(Windows only)
wvstreams
zeroc
libcurl
libuv (Cross-platform C library)
SFML's Network Module
C++ Rest SDK (Casablanca)
RCF
Restbed (HTTP Asynchronous Framework)
SedNL
SDL_net
OpenSplice|DDS
facil.io (C, with optional HTTP and Websockets, Linux / BSD / macOS)
GLib Networking
grpc from Google
GameNetworkingSockets from Valve
CYSockets To do easy things in the easiest way
yojimbo
GGPO
ENet
SLikeNet is a fork of Raknet
netcode
photon is closed source, requires license to use their sdk
crossplatform network - open source non blocking metatemplate framework built on top of boost asio
|
119,098 | 119,194 |
Which C I/O library should be used in C++ code?
|
In new C++ code, I tend to use the C++ iostream library instead of the C stdio library.
I've noticed some programmers seem to stick to stdio, insisting that it's more portable.
Is this really the case? What is better to use?
|
To answer the original question:
Anything that can be done using stdio can be done using the iostream library.
Disadvantages of iostreams: verbose
Advantages of iostreams: easy to extend for new non POD types.
The step forward that C++ made over C was type safety.
iostreams was designed to be explicitly type safe. Thus assignment to an object explicitly checks the type (at compile time) of the object being assigned to (generating a compile-time error if required), preventing run-time memory overruns, writing a float value to a char object, etc.
scanf()/printf() and family, on the other hand, rely on the programmer getting the format string correct, and there is no type checking (I believe gcc has an extension that helps). As a result they were the source of many bugs (as programmers are less perfect in their analysis than compilers; not saying compilers are perfect, just better than humans).
Just to clarify comments from Colin Jensen.
The iostream libraries have been stable since the release of the last standard (I forget the actual year but about 10 years ago).
To clarify comments by Mikael Jansson.
The other languages that he mentions that use the format style have explicit safeguards to prevent the dangerous side effects of the C stdio library that can (in C but not the mentioned languages) cause a run-time crash.
N.B. I agree that the iostream library is a bit on the verbose side. But I am willing to put up with the verbosity to ensure runtime safety. And we can mitigate the verbosity by using the Boost Format library:
#include <iostream>
#include <iomanip>
#include <boost/format.hpp>
struct X
{ // this structure reverse engineered from
// example provided by 'Mikael Jansson' in order to make this a running example
char* name;
double mean;
int sample_count;
};
int main()
{
X stats[] = {{"Plop",5.6,2}};
// nonsense output, just to exemplify
// stdio version
fprintf(stderr, "at %p/%s: mean value %.3f of %4d samples\n",
stats, stats->name, stats->mean, stats->sample_count);
// iostream
std::cerr << "at " << (void*)stats << "/" << stats->name
<< ": mean value " << std::fixed << std::setprecision(3) << stats->mean
<< " of " << std::setw(4) << std::setfill(' ') << stats->sample_count
<< " samples\n";
// iostream with boost::format
std::cerr << boost::format("at %p/%s: mean value %.3f of %4d samples\n")
% stats % stats->name % stats->mean % stats->sample_count;
}
|
119,114 | 119,148 |
Conditional compilation for working at home
|
I code C++ using MS Dev Studio and I work from home two days per week. I use CVS to keep my sources synchronized between the two computers but there are difference between the environments the machines are in.
Can anyone suggest a way I can conditionally modify constants in my code depending on whether I am compiling on my home box or not ?
What I am after is a way of defining a symbol, let's call it _ATHOME, automatically so I can do this:
#ifdef _ATHOME
# define TEST_FILES "E:\\Test"
# define TEST_SERVER "192.168.0.1"
#else
# define TEST_FILES "Z:\\Project\\Blah\\Test"
# define TEST_SERVER "212.45.68.43"
#endif
NB: This is for development and debugging purposes of course, I would never release software with hard coded constants like this.
|
On your home and work machines, set an environment variable LOCATION that is either "1" for home or "2" for work.
Then in the preprocessor options, add a preprocessor define /DLOCATION=$(LOCATION). This will evaluate to the value (1 or 2) that you set in the environment variable.
Then in your code:
#if LOCATION==1
// home
#else
// work
#endif
|
119,123 | 119,128 |
Why isn't sizeof for a struct equal to the sum of sizeof of each member?
|
Why does the sizeof operator return a size larger for a structure than the total sizes of the structure's members?
|
This is because of padding added to satisfy alignment constraints. Data structure alignment impacts both performance and correctness of programs:
Mis-aligned access might be a hard error (often SIGBUS).
Mis-aligned access might be a soft error.
Either corrected in hardware, for a modest performance-degradation.
Or corrected by emulation in software, for a severe performance-degradation.
In addition, atomicity and other concurrency-guarantees might be broken, leading to subtle errors.
Here's an example using typical settings for an x86 processor (these apply to both 32- and 64-bit modes):
struct X
{
short s; /* 2 bytes */
/* 2 padding bytes */
int i; /* 4 bytes */
char c; /* 1 byte */
/* 3 padding bytes */
};
struct Y
{
int i; /* 4 bytes */
char c; /* 1 byte */
/* 1 padding byte */
short s; /* 2 bytes */
};
struct Z
{
int i; /* 4 bytes */
short s; /* 2 bytes */
char c; /* 1 byte */
/* 1 padding byte */
};
const int sizeX = sizeof(struct X); /* = 12 */
const int sizeY = sizeof(struct Y); /* = 8 */
const int sizeZ = sizeof(struct Z); /* = 8 */
One can minimize the size of structures by sorting members by alignment (for basic types, sorting by size suffices), like structure Z in the example above.
IMPORTANT NOTE: Both the C and C++ standards state that structure alignment is implementation-defined. Therefore each compiler may choose to align data differently, resulting in different and incompatible data layouts. For this reason, when dealing with libraries that will be used by different compilers, it is important to understand how the compilers align data. Some compilers have command-line settings and/or special #pragma statements to change the structure alignment settings.
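For instance, a sketch using the widely supported (but still compiler-specific) pack pragma:
#pragma pack(push, 1) /* suppress all padding from here on */
struct PackedX
{
    short s; /* 2 bytes */
    int i;   /* 4 bytes */
    char c;  /* 1 byte  */
};           /* sizeof(struct PackedX) == 7: no padding bytes */
#pragma pack(pop)     /* restore the previous packing */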
|
119,414 | 119,522 |
How would you unittest a memory allocator?
|
There are a lot of people today who sell unit testing as the bread and butter of development. That might even work for strongly algorithmically-oriented routines. However, how would you unit-test, for example, a memory allocator (think malloc()/realloc()/free())? It's not hard to produce a working (but absolutely useless) memory allocator that satisfies the specified interface. But how do you provide the proper context for unit-testing functionality that is absolutely desired, yet not part of the contract: coalescing free blocks, reusing free blocks on next allocations, returning excess free memory to the system, asserting that the allocation policy (e.g. first-fit) really is respected, etc.?
My experience is that assertions, even if complex and time-consuming (e.g. traversing the whole free list to check invariants) are much less work and are more reliable than unit-testing, esp. when coding complex, time-dependent algorithms.
Any thoughts?
|
Highly testable code tends to be structured differently than other code.
You describe several tasks that you want an allocator to do:
coalescing free blocks
reusing free blocks on next allocations
returning excess free memory to the system
asserting that the allocation policy (e.g. first-fit) really is respected
While you might write your allocation code to be very coupled, as in doing several of those things inside one function body, you could also break each task out into code that is a testable chunk. This is almost an inversion of what you may be used to. I find that testable code tends to be very transparent and built from many small pieces.
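For example, if coalescing is factored into its own function, it can be exercised directly on a hand-built free list. This is only a hypothetical sketch; Block, FreeList and coalesce() are made-up names standing in for your own types:
#include <cassert>
void test_coalesce_adjacent_blocks()
{
    FreeList list;                   // hypothetical free-list type
    list.insert(Block(0, 16));       // block at offset 0, 16 bytes
    list.insert(Block(16, 16));      // adjacent block
    coalesce(list);                  // the unit under test, in isolation
    assert(list.count() == 1);       // the two neighbours were merged
    assert(list.front().size == 32);
}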
Next, I would say is that within reason automated testing of any sort is better than no automated testing. I would definitely focus more on making sure your tests do something useful than worrying if you've properly used mocks, whether you've ensured it's properly isolated and whether it's a true unit test. Those are all admirable goals that will hopefully make 99% of tests better. On the other hand, please use common sense and your best engineering judgment to get the job done.
Without code samples I don't think I can be more specific.
|
119,492 | 119,498 |
Difference between Visual C++ 2008 and 2005
|
I couldn't find any useful information on Microsoft's site, so here is the question: has the compiler in Visual C++ 2008 been improved significantly since the 2005 version? I'm especially looking for better optimization.
|
Straight from the horse's mouth:
http://msdn.microsoft.com/en-us/library/bb384632.aspx
|
119,578 | 119,752 |
Disabling Warnings generated via _CRT_SECURE_NO_DEPRECATE
|
What is the best way to disable the warnings generated via _CRT_SECURE_NO_DEPRECATE that allows them to be reinstated with ease and will work across Visual Studio versions?
|
If you don't want to pollute your source code (after all this warning presents only with Microsoft compiler), add _CRT_SECURE_NO_WARNINGS symbol to your project settings via "Project"->"Properties"->"Configuration properties"->"C/C++"->"Preprocessor"->"Preprocessor definitions".
Also you can define it just before you include a header file which generates this warning.
You should add something like this
#ifdef _MSC_VER
#define _CRT_SECURE_NO_WARNINGS
#endif
And just a small remark: make sure you understand what this warning stands for, and maybe, if you don't intend to use compilers other than MSVC, consider using the safer versions of the functions, e.g. strcpy_s instead of strcpy.
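For example (MSVC-specific sketch):
char buf[16];
strcpy_s(buf, sizeof(buf), "example"); // fails fast instead of silently overflowing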
|