Dataset Viewer (auto-converted to Parquet)
Columns: question_id (int64, 25 to 74.7M), answer_id (int64, 332 to 74.7M), title (string, 20 to 150 chars), question (string, 23 to 4.1k chars), answer (string, 20 to 4.1k chars)
25
1,443,907
How to use the C socket API in C++ on z/OS
I'm having issues getting the C sockets API to work properly in C++ on z/OS. Although I am including sys/socket.h, I still get compile time errors telling me that AF_INET is not defined. Am I missing something obvious, or is this related to the fact that being on z/OS makes my problems much more complicated? I discovered that there is an #ifdef that I'm hitting. Apparently z/OS isn't happy unless I define which "type" of sockets I'm using with: #define _OE_SOCKETS Now, I personally have no idea what this _OE_SOCKETS is actually for, so if any z/OS sockets programmers are out there (all 3 of you), perhaps you could give me a rundown of how this all works? Test App #include <sys/socket.h> int main() { return AF_INET; } Compile/Link Output: cxx -Wc,xplink -Wl,xplink -o inet_test inet.C "./inet.C", line 5.16: CCN5274 (S) The name lookup for "AF_INET" did not find a declaration. CCN0797(I) Compilation failed for file ./inet.C. Object file not created. A check of sys/sockets.h does include the definition I need, and as far as I can tell, it is not being blocked by any #ifdef statements. I have however noticed it contains the following: #ifdef __cplusplus extern "C" { #endif which encapsulates basically the whole file? Not sure if it matters.
Keep a copy of the IBM manuals handy: z/OS V1R11.0 XL C/C++ Programming Guide z/OS V1R11.0 XL C/C++ Run-Time Library Reference The IBM publications are generally very good, but you need to get used to their format, as well as knowing where to look for an answer. You'll find quite often that a feature that you want to use is guarded by a "feature test macro" You should ask your friendly system programmer to install the XL C/C++ Run-Time Library Reference: Man Pages on your system. Then you can do things like "man connect" to pull up the man page for the socket connect() API. When I do that, this is what I see: FORMAT X/Open #define _XOPEN_SOURCE_EXTENDED 1 #include <sys/socket.h> int connect(int socket, const struct sockaddr *address, socklen_t address_len); Berkeley Sockets #define _OE_SOCKETS #include <sys/types.h> #include <sys/socket.h> int connect(int socket, struct sockaddr *address, int address_len);
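Putting that together, here is a minimal version of the test program from the question; the only change is defining the feature-test macro before any system headers are included. Passing the macro on the compile line (for example with -D_OE_SOCKETS) should be equivalent, though I have not verified the exact flag syntax of the z/OS cxx driver, so treat that variant as an assumption.

    #define _OE_SOCKETS      /* must come before the system headers */
    #include <sys/types.h>
    #include <sys/socket.h>

    int main()
    {
        return AF_INET;      /* compiles once the Berkeley-socket definitions are visible */
    }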
330
332
Should I use nested classes in this case?
I am working on a collection of classes used for video playback and recording. I have one main class which acts like the public interface, with methods like play(), stop(), pause(), record() etc... Then I have workhorse classes which do the video decoding and video encoding. I just learned about the existence of nested classes in C++, and I'm curious to know what programmers think about using them. I am a little wary and not really sure what the benefits/drawbacks are, but they seem (according to the book I'm reading) to be used in cases such as mine. The book suggests that in a scenario like mine, a good solution would be to nest the workhorse classes inside the interface class, so there are no separate files for classes the client is not meant to use, and to avoid any possible naming conflicts? I don't know about these justifications. Nested classes are a new concept to me. Just want to see what programmers think about the issue.
I would be a bit reluctant to use nested classes here. What if you created an abstract base class for a "multimedia driver" to handle the back-end stuff (workhorse), and a separate class for the front-end work? The front-end class could take a pointer/reference to an implemented driver class (for the appropriate media type and situation) and perform the abstract operations on the workhorse structure. My philosophy would be to go ahead and make both structures accessible to the client in a polished way, just under the assumption they would be used in tandem. I would reference something like a QTextDocument in Qt. You provide a direct interface to the bare metal data handling, but pass the authority along to an object like a QTextEdit to do the manipulation.
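To make that concrete, here is a rough sketch of the split I mean; all class and method names below are invented for illustration, not taken from your code:

    // Abstract "workhorse" interface (the back-end driver).
    class MediaDriver {
    public:
        virtual ~MediaDriver() {}
        virtual void decodeFrame() = 0;
        virtual void encodeFrame() = 0;
    };

    // One concrete back-end among several possible implementations.
    class Mpeg4Driver : public MediaDriver {
    public:
        virtual void decodeFrame() { /* codec-specific work */ }
        virtual void encodeFrame() { /* codec-specific work */ }
    };

    // Public front-end: owns no codec logic, just delegates to a driver.
    class Player {
    public:
        explicit Player(MediaDriver& driver) : m_driver(driver) {}
        void play()   { m_driver.decodeFrame(); }
        void record() { m_driver.encodeFrame(); }
    private:
        MediaDriver& m_driver;
    };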
601
602
Robust Random Number Generation
I'm looking for a performant, reasonably robust RNG using no special hardware. It can use mathematical methods (Mersenne Twister, etc), it can "collect entropy" from the machine, whatever. On Linux/etc we have a drand48() which generates 48 random bits. I'd like a similar function/class for C++ or C# which can generate more than 32 bits of randomness and which low-order bits are equally as random as high-order bits. It doesn't have to be cryptographically secure but it must not use or be based on the C-language rand() or .NET System.Random. Any source code, links to source, etc. would be appreciated! Failing that, what TYPE of RNG should I be looking for?
For C++, Boost.Random is probably what you're looking for. It has support for MT (among many other algorithms), and can collect entropy via the nondet_random class. Check it out! :-)
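A minimal sketch of what that looks like, assuming a reasonably recent Boost (the 64-bit Mersenne Twister typedef and uniform_int_distribution appeared in later releases, and random_device may require linking the compiled Boost.Random library):

    #include <boost/random/mersenne_twister.hpp>
    #include <boost/random/uniform_int_distribution.hpp>
    #include <boost/random/random_device.hpp>
    #include <boost/cstdint.hpp>
    #include <iostream>

    int main()
    {
        boost::random::random_device entropy;        // non-deterministic seed source
        boost::random::mt19937_64 rng(entropy());    // 64-bit Mersenne Twister
        boost::random::uniform_int_distribution<boost::uint64_t> dist;  // defaults to the full 64-bit range

        for (int i = 0; i < 5; ++i)
            std::cout << dist(rng) << '\n';
        return 0;
    }

The 64-bit generator gives you more than 32 bits per draw, and Mersenne Twister output does not suffer from the weak low-order bits of old linear-congruential rand() implementations.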
1,041
17,198,991
Easy-to-Use Regular Expression Support in C++?
I'm looking for a robust, easy-to-use, regular expression evaluator for native C++, either platform-independent or Windows-specific. Requirements: Can't use Boost or ACE regex libraries (unfortunately) Can't use .NET Regex (or any managed code) The main requirement is that it should be standalone and open.
C++11 now includes the support of regular expressions. It will be platform independent. You just need a recent compiler. Check the following list to know which one to use. http://wiki.apache.org/stdcxx/C%2B%2B0xCompilerSupport Hope it helps
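A minimal sketch (note that libstdc++'s <regex> only became usable around GCC 4.9, which is exactly why the compiler-support list matters):

    #include <iostream>
    #include <regex>
    #include <string>

    int main()
    {
        const std::string text = "User: alice, id 42";
        const std::regex pattern("id ([0-9]+)");   // ECMAScript grammar by default

        std::smatch match;
        if (std::regex_search(text, match, pattern))
            std::cout << "found id: " << match[1] << '\n';
        return 0;
    }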
1,104
1,109
Ensuring that Exceptions are always caught
Exceptions in C++ don't need to be caught (no compile time errors) by the calling function. So it's up to developer's judgment whether to catch them using try/catch (unlike in Java). Is there a way one can ensure that the exceptions thrown are always caught using try/catch by the calling function?
No. See A Pragmatic Look at Exception Specifications for reasons why not. The only way you can "help" this is to document the exceptions your function can throw, say as a comment in the header file declaring it. This is not enforced by the compiler or anything. Use code reviews for that purpose.
1,241
1,275
Asynchronous multi-direction server-client communication over the same open socket?
I have a client-server app where the client is on a Windows Mobile 6 device, written in C++ and the server is on full Windows and written in C#. Originally, I only needed it to send messages from the client to the server, with the server only ever sending back an acknowledgement that it received the message. Now, I would like to update it so that the server can actually send a message to the client to request data. As I currently have it set up so the client is only in receive mode after it sends data to the server, this doesn't allow for the server to send a request at any time. I would have to wait for client data. My first thought would be to create another thread on the client with a separate open socket, listening for server requests...just like the server already has in respect the client. Is there a way, within the same thread and using the same socket, to all the server to send requests at any time? Can you use something to the effect of WaitForMultipleObjects() and pass it a receive buffer and an event that tells it there is data to be sent?
When I needed to write an application with a client-server model where the clients could leave and enter whenever they wanted (I assume that's also the case for your application, since you use mobile devices), I made sure that the clients sent an online message to the server, indicating they were connected and ready to do whatever they needed to do. At that time the server could send messages back to the client through the same open connection. Also, though I don't know if it is applicable for you, I had some sort of heartbeat the clients sent to the server, letting it know they were still online. That way the server knew when a client had been forcibly disconnected from the network and could mark that client as offline.
1,982
47,086
CPU throttling in C++
I was just wondering if there is an elegant way to set the maximum CPU load for a particular thread doing intensive calculations. Right now I have located the most time consuming loop in the thread (it does only compression) and use GetTickCount() and Sleep() with hardcoded values. It makes sure that the loop continues for a certain period and then sleeps for a certain minimum time. It more or less does the job, i.e. guarantees that the thread will not use more than 50% of CPU. However, behavior is dependent on the number of CPU cores (huge disadvantage) and simply ugly (smaller disadvantage :)). Any ideas?
I am not aware of any API to do get the OS's scheduler to do what you want (even if your thread is idle-priority, if there are no higher-priority ready threads, yours will run). However, I think you can improvise a fairly elegant throttling function based on what you are already doing. Essentially (I don't have a Windows dev machine handy): Pick a default amount of time the thread will sleep each iteration. Then, on each iteration (or on every nth iteration, such that the throttling function doesn't itself become a significant CPU load), Compute the amount of CPU time your thread used since the last time your throttling function was called (I'll call this dCPU). You can use the GetThreadTimes() API to get the amount of time your thread has been executing. Compute the amount of real time elapsed since the last time your throttling function was called (I'll call this dClock). dCPU / dClock is the percent CPU usage (of one CPU). If it is higher than you want, increase your sleep time, if lower, decrease the sleep time. Have your thread sleep for the computed time. Depending on how your watchdog computes CPU usage, you might want to use GetProcessAffinityMask() to find out how many CPUs the system has. dCPU / (dClock * CPUs) is the percentage of total CPU time available. You will still have to pick some magic numbers for the initial sleep time and the increment/decrement amount, but I think this algorithm could be tuned to keep a thread running at fairly close to a determined percent of CPU.
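Here is an untested sketch of that algorithm (Windows-specific; the initial sleep and the increment step are placeholder magic numbers to tune):

    #include <windows.h>

    // Call Throttle() once per iteration (or every Nth iteration) of the hot loop.
    class ThreadThrottle {
    public:
        explicit ThreadThrottle(double targetLoad)   // e.g. 0.5 for ~50% of one core
            : m_target(targetLoad), m_sleepMs(10),
              m_lastTick(GetTickCount()), m_lastCpu(GetThreadCpu()) {}

        void Throttle()
        {
            const DWORD     nowTick = GetTickCount();
            const ULONGLONG nowCpu  = GetThreadCpu();

            const double dClock = static_cast<double>(nowTick - m_lastTick);  // wall time, ms
            const double dCpu   = (nowCpu - m_lastCpu) / 10000.0;             // 100ns units -> ms

            if (dClock > 0.0) {
                const double load = dCpu / dClock;       // fraction of one CPU used
                if (load > m_target)      m_sleepMs += 5;
                else if (m_sleepMs >= 5)  m_sleepMs -= 5;
            }
            m_lastTick = nowTick;
            m_lastCpu  = nowCpu;
            if (m_sleepMs > 0) Sleep(m_sleepMs);
        }

    private:
        static ULONGLONG GetThreadCpu()
        {
            FILETIME creationTime, exitTime, kernelTime, userTime;
            GetThreadTimes(GetCurrentThread(), &creationTime, &exitTime, &kernelTime, &userTime);
            ULARGE_INTEGER k, u;
            k.LowPart = kernelTime.dwLowDateTime; k.HighPart = kernelTime.dwHighDateTime;
            u.LowPart = userTime.dwLowDateTime;   u.HighPart = userTime.dwHighDateTime;
            return k.QuadPart + u.QuadPart;       // total CPU time in 100ns units
        }

        double    m_target;
        DWORD     m_sleepMs;
        DWORD     m_lastTick;
        ULONGLONG m_lastCpu;
    };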
2,256
2,490
Mapping Stream data to data structures in C#
Is there a way of mapping data collected on a stream or array to a data structure or vice-versa? In C++ this would simply be a matter of casting a pointer to the stream as a data type I want to use (or vice-versa for the reverse) eg: in C++ Mystruct * pMyStrct = (Mystruct*)&SomeDataStream; pMyStrct->Item1 = 25; int iReadData = pMyStrct->Item2; obviously the C++ way is pretty unsafe unless you are sure of the quality of the stream data when reading incoming data, but for outgoing data is super quick and easy.
Most people use .NET serialization (there is faster binary and slower XML formatter, they both depend on reflection and are version tolerant to certain degree) However, if you want the fastest (unsafe) way - why not: Writing: YourStruct o = new YourStruct(); byte[] buffer = new byte[Marshal.SizeOf(typeof(YourStruct))]; GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned); Marshal.StructureToPtr(o, handle.AddrOfPinnedObject(), false); handle.Free(); Reading: handle = GCHandle.Alloc(buffer, GCHandleType.Pinned); o = (YourStruct)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(YourStruct)); handle.Free();
2,756
2,762
Lightweight IDE for Linux
Even though I have a robust and fast computer (Pentium Dual Core 2.0 with 2Gb RAM), I'm always searching for lightweight software to have on it, so it runs fast even when many apps are up and running simultaneously. On the last few weeks I've been migrating gradually to Linux and want to install a free lightweight yet useful IDE to program on C++ and PHP. Syntax highlighting and code completion tips are must-haves.
gedit Syntax highlighting Fast, lightweight Tabs GUI
3,049
3,054
How do I configure and communicate with a serial port?
I need to send and receive data over serial connections (RS-232 and RS-422). How do I set up and communicate with such a connection? How do I figure out what the configuration settings (e.g. baud rate) should be and how do I set them? In particular I am looking to do this in Java, C/C++, or one of the major Unix shells but I also have some interest in serial programming using Windows/Hyperterminal.
Build a time machine and go back to 1987? Ho ho. Ok, no more snarky comments. How do I figure out what the configuration settings (e.g. baud rate) should be... Read the datasheet? Ok, ok. Seriously, last one. If you don't know the baud rate of the device you are trying to communicate with, you have two choices. Start guessing, or possibly bust out an o-scope. If you need a good starting point, let me suggest 9600-8-N-1. My suspicion is you can get there with brute force relatively quickly. There's a third option of having an old-school ninja who can tell just by the LOOK of the garbled characters at some standard baud rate what actual baud rate is. An impressive party trick to be sure. Hopefully though you have access to this information. In unix/linux, you can get ahold of minicom to play with the serial port directly. This should make it fairly quick to get the configuration figured out. one of the major Unix shells In Unix the serial port(s) is/are file-mapped into the /dev/ subdir. ttyS0, for example. If you setup the correct baud rate and whatnot using minicom, you can even cat stuff to that file to send stuff out there. On to the meat of the question, you can access it programmatically through the POSIX headers. termios.h is the big one. See: http://www.easysw.com/~mike/serial/serial.html#3_1 (NOT AVAILABLE ANYMORE) but I also have some interest in serial programming using Windows/Hyperterminal. Hyperterminal and minicom are basically the same program. As for how Windows let's you get access to the serial port, I'll leave that question for someone else. I haven't done that in Windows since the Win95 days.
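For the POSIX side, a minimal sketch of opening a port at 9600-8-N-1 with termios (cfmakeraw is a BSD/glibc convenience; on a strictly POSIX system you set the equivalent flags by hand, and the device name is just an example):

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);   /* adjust the device name */
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);                   /* raw mode: no echo, no line editing, 8 data bits, no parity */
        tio.c_cflag &= ~CSTOPB;            /* one stop bit */
        tio.c_cflag |= (CLOCAL | CREAD);   /* ignore modem control lines, enable receiver */
        cfsetispeed(&tio, B9600);          /* 9600 baud in and out */
        cfsetospeed(&tio, B9600);
        tcsetattr(fd, TCSANOW, &tio);

        write(fd, "hello\r\n", 7);

        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0) printf("got %d bytes\n", (int)n);

        close(fd);
        return 0;
    }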
3,150
35,963
How to set up unit testing for Visual Studio C++
I'm having trouble figuring out how to get the testing framework set up and usable in Visual Studio 2008 for C++ presumably with the built-in unit testing suite. Any links or tutorials would be appreciated.
This page may help, it reviews quite a few C++ unit test frameworks: CppUnit Boost.Test CppUnitLite NanoCppUnit Unit++ CxxTest Check out CPPUnitLite or CPPUnitLite2. CPPUnitLite was created by Michael Feathers, who originally ported Java's JUnit to C++ as CPPUnit (CPPUnit tries mimic the development model of JUnit - but C++ lacks Java's features [e.g. reflection] to make it easy to use). CPPUnitLite attempts to make a true C++-style testing framework, not a Java one ported to C++. (I'm paraphrasing from Feather's Working Effectively with Legacy Code book). CPPUnitLite2 seems to be another rewrite, with more features and bug fixes. I also just stumbled across UnitTest++ which includes stuff from CPPUnitLite2 and some other framework. Microsoft has released WinUnit. Also checkout Catch or Doctest
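To give a feel for how little ceremony the header-only ones need, here is a minimal sketch using Catch (the include is catch.hpp for the classic single-header release; Catch2 moved the header, so adjust to your version):

    #define CATCH_CONFIG_MAIN      // ask Catch to generate main() for us
    #include "catch.hpp"

    static int factorial(int n) { return n <= 1 ? 1 : n * factorial(n - 1); }

    TEST_CASE("factorials are computed", "[factorial]")
    {
        REQUIRE(factorial(1)  == 1);
        REQUIRE(factorial(5)  == 120);
        REQUIRE(factorial(10) == 3628800);
    }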
3,230
10,141
How do you pack a visual studio c++ project for release?
I'm wondering how to make a release build that includes all necessary dll files into the .exe so the program can be run on a non-development machine without it having to install the microsoft redistributable on the target machine. Without doing this you get the error message that the application configuration is not correct and to reinstall.
Choose Project -> Properties Select Configuration -> General In the box for how you should link MFC, choose to statically link it. Choose Linker -> Input. Under Additional Dependencies, add any libraries you need your app to statically link in.
3,231
842,632
C/C++ library for reading MIDI signals from a USB MIDI device
I want to write C/C++ programs that take input from a MIDI device. The MIDI device connects to my PC using a USB connector. I'm looking for a (C/C++ implemented) library that I can use to read the MIDI signals from the MIDI device through the USB port. I'm happy manipulating the MIDI data once I get it, I just don't want to have to implement the code for its capture. I'm planning on writing my code using the Bloodshed Dev-C++ IDE on Windows XP.
PortMidi is another open source cross-platform MIDI I/O library worth checking out. On the other hand, if you are working on a sysex type of app, then direct Win32 works easily enough. Just came across another open source cross-platform framework that includes MIDI support: Juce. Also, I should note that there isn't anything special about a USB connected MIDI device. It will still be presented as a MIDI device in Windows and you will use standard MIDI APIs (mmsystem) to communicate with it. [July 2014] I just came across RtMidi that looks to be a nice, compact, open source cross-platform C++ library.
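A minimal RtMidi polling sketch, written from memory, so treat the details as assumptions (newer releases throw RtMidiError, older ones used RtError; a real app would use a callback or sleep between polls instead of spinning):

    #include "RtMidi.h"
    #include <iostream>
    #include <vector>

    int main()
    {
        try {
            RtMidiIn midiin;
            if (midiin.getPortCount() == 0) {
                std::cout << "No MIDI input ports found.\n";
                return 0;
            }
            midiin.openPort(0);                        // first available device
            midiin.ignoreTypes(false, false, false);   // keep sysex, timing, active sensing

            std::vector<unsigned char> message;
            for (;;) {
                double stamp = midiin.getMessage(&message);   // non-blocking poll
                if (!message.empty())
                    std::cout << "t=" << stamp << " status=0x" << std::hex
                              << (int)message[0] << std::dec
                              << " bytes=" << message.size() << "\n";
            }
        } catch (RtMidiError& e) {
            e.printMessage();
        }
        return 0;
    }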
4,306
4,311
What is the best way to create a sparse array in C++?
I am working on a project that requires the manipulation of enormous matrices, specifically pyramidal summation for a copula calculation. In short, I need to keep track of a relatively small number of values (usually a value of 1, and in rare cases more than 1) in a sea of zeros in the matrix (multidimensional array). A sparse array allows the user to store a small number of values, and assume all undefined records to be a preset value. Since it is not physically possibly to store all values in memory, I need to store only the few non-zero elements. This could be several million entries. Speed is a huge priority, and I would also like to dynamically choose the number of variables in the class at runtime. I currently work on a system that uses a binary search tree (b-tree) to store entries. Does anyone know of a better system?
For C++, a map works well. Several million objects won't be a problem. 10 million items took about 4.4 seconds and about 57 meg on my computer. My test application is as follows: #include <stdio.h> #include <stdlib.h> #include <map> class triple { public: int x; int y; int z; bool operator<(const triple &other) const { if (x < other.x) return true; if (other.x < x) return false; if (y < other.y) return true; if (other.y < y) return false; return z < other.z; } }; int main(int, char**) { std::map<triple,int> data; triple point; int i; for (i = 0; i < 10000000; ++i) { point.x = rand(); point.y = rand(); point.z = rand(); //printf("%d %d %d %d\n", i, point.x, point.y, point.z); data[point] = i; } return 0; } Now to dynamically choose the number of variables, the easiest solution is to represent the index as a string, and then use the string as a key for the map. For instance, an item located at [23][55] can be represented via the string "23,55". We can also extend this solution for higher dimensions; such as for three dimensions an arbitrary index will look like "34,45,56". A simple implementation of this technique is as follows: std::map<std::string,int> data; char ix[100]; sprintf(ix, "%d,%d", x, y); // 2 vars data[ix] = i; sprintf(ix, "%d,%d,%d", x, y, z); // 3 vars data[ix] = i;
6,915
6,955
Thread safe lazy construction of a singleton in C++
Is there a way to implement a singleton object in C++ that is: Lazily constructed in a thread safe manner (two threads might simultaneously be the first user of the singleton - it should still only be constructed once). Doesn't rely on static variables being constructed beforehand (so the singleton object is itself safe to use during the construction of static variables). (I don't know my C++ well enough, but is it the case that integral and constant static variables are initialized before any code is executed (ie, even before static constructors are executed - their values may already be "initialized" in the program image)? If so - perhaps this can be exploited to implement a singleton mutex - which can in turn be used to guard the creation of the real singleton..) Excellent, it seems that I have a couple of good answers now (shame I can't mark 2 or 3 as being the answer). There appears to be two broad solutions: Use static initialisation (as opposed to dynamic initialisation) of a POD static variable, and implementing my own mutex with that using the builtin atomic instructions. This was the type of solution I was hinting at in my question, and I believe I knew already. Use some other library function like pthread_once or boost::call_once. These I certainly didn't know about - and am very grateful for the answers posted.
Basically, you're asking for synchronized creation of a singleton, without using any synchronization (previously-constructed variables). In general, no, this is not possible. You need something available for synchronization. As for your other question, yes, static variables which can be statically initialized (i.e. no runtime code necessary) are guaranteed to be initialized before other code is executed. This makes it possible to use a statically-initialized mutex to synchronize creation of the singleton. From the 2003 revision of the C++ standard: Objects with static storage duration (3.7.1) shall be zero-initialized (8.5) before any other initialization takes place. Zero-initialization and initialization with a constant expression are collectively called static initialization; all other initialization is dynamic initialization. Objects of POD types (3.9) with static storage duration initialized with constant expressions (5.19) shall be initialized before any dynamic initialization takes place. Objects with static storage duration defined in namespace scope in the same translation unit and dynamically initialized shall be initialized in the order in which their definition appears in the translation unit. If you know that you will be using this singleton during the initialization of other static objects, I think you'll find that synchronization is a non-issue. To the best of my knowledge, all major compilers initialize static objects in a single thread, so thread-safety is not an issue during static initialization. You can declare your singleton pointer to be NULL, and then check to see if it's been initialized before you use it. However, this assumes that you know that you'll use this singleton during static initialization. This is also not guaranteed by the standard, so if you want to be completely safe, use a statically-initialized mutex. Edit: Chris's suggestion to use an atomic compare-and-swap would certainly work. If portability is not an issue (and creating additional temporary singletons is not a problem), then it is a slightly lower overhead solution.
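For completeness, here is roughly what the pthread_once variant looks like (Widget is a made-up class name; strictly speaking POSIX wants the init routine to have C linkage, though in practice a plain function or static member compiles and works with the usual toolchains):

    #include <pthread.h>

    class Widget {                                   // hypothetical singleton type
    public:
        static Widget& instance();
    private:
        Widget() {}
        static void create() { s_instance = new Widget; }

        static pthread_once_t s_once;
        static Widget*        s_instance;
    };

    // Both initializers are constants, so these are set up during *static*
    // initialization, before any dynamic initializer or user code runs.
    pthread_once_t Widget::s_once     = PTHREAD_ONCE_INIT;
    Widget*        Widget::s_instance = 0;

    Widget& Widget::instance()
    {
        pthread_once(&s_once, &Widget::create);      // at most one thread ever runs create()
        return *s_instance;
    }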
7,209
119,880
Alpha blending sprites in Nintendo DS Homebrew
I'm trying to alpha blend sprites and backgrounds with devkitPro (including libnds, libarm, etc). Does anyone know how to do this?
As a generic reference, I once wrote a small blog entry about that issue. Basically, you first have to define which layer is alpha-blended against which other layer(s). AFAIK, the source layer(s) must be over the destination layer(s) to have any blending displayed. That means the priority of source layers should be numerically lower than the priority of destination layers. The source layer is what is going to be translucent, the destination(s) is what is going to be seen through (and yes, I find this rather confusing). For the sprites, specifically, you then have 3 ways to achieve alpha-blending depending on what you need and what you're "ready to pay" for it: You can make all the sprites have some alpha-blending by turning on BLEND_SRC_SPRITE in REG_BLDCNT[_SUB] ... not that useful. You can selectively turn on blending of some sprites by using ATTR0_TYPE_BLENDED. The blending level will be the same for all sprites (and layers). Bitmap-type sprites use direct colors (bypassing the palettes), so the ATTR2_PALETTE() field of GBA sprites is useless and has been recycled into ATTR2_ALPHA.
7,489
13,384
How do I make a GUI?
I've made many different separate parts of a GUI system for the Nintendo DS, like buttons and textboxes and select boxes, but I need a way of containing these classes in one Gui class, so that I can draw everything to the screen all at once, and check all the buttons at once to check if any are being pressed. My question is what is the best way to organize all the classes (such as buttons and textboxes) into one GUI class? Here's one way I thought of but it doesn't seem right: Edit: I'm using C++. class Gui { public: void update_all(); void draw_all() const; int add_button(Button *button); // Returns button id void remove_button(int button_id); private: Button *buttons[10]; int num_buttons; }; This code has a few problems, but I just wanted to give you an idea of what I want.
This question is very similar to one I was going to post, only mine is for Sony PSP programming. I've been toying with something for a while, I've consulted some books and VTMs, and so far this is a rough idea of a simple ui system. class uiElement { ... virtual void Update() = 0; virtual void Draw() = 0; ... }; class uiButton : public uiElement { ... virtual void Update(); virtual void Draw(); ... }; class uiTextbox : public uiElement { ... virtual void Update(); virtual void Draw(); ... }; ... // Other ui Elements class uiWindow { ... void Update(); void Draw(); void AddElement(uiElement *Element); void RemoveElement(uiElement *Element); std::list <uiElement*> Elements; ... }; void uiWindow::Update() { ... for (std::list <uiElement*>::iterator it = Elements.begin(); it != Elements.end(); it++ ) (*it)->Update(); ... } void uiWindow::Draw() { ... for (std::list <uiElement*>::iterator it = Elements.begin(); it != Elements.end(); it++ ) (*it)->Draw(); ... } The principle is to create a window and attach ui Elements to it, and call the draw and update methods from the respective main functions. I don't have anything working yet, as I have issues with drawing code. With different APIs on the PC and PSP, I'm looking at some wrapper code for OpenGL and psp gu. Hope this helps. thing2k
7,525
7,695
Of Memory Management, Heap Corruption, and C++
So, I need some help. I am working on a project in C++. However, I think I have somehow managed to corrupt my heap. This is based on the fact that I added an std::string to a class and assigning it a value from another std::string: std::string hello = "Hello, world.\n"; /* exampleString = "Hello, world.\n" would work fine. */ exampleString = hello; crashes on my system with a stack dump. So basically I need to stop and go through all my code and memory management stuff and find out where I've screwed up. The codebase is still small (about 1000 lines), so this is easily do-able. Still, I'm over my head with this kind of stuff, so I thought I'd throw it out there. I'm on a Linux system and have poked around with valgrind, and while not knowing completely what I'm doing, it did report that the std::string's destructor was an invalid free. I have to admit to getting the term 'Heap Corruption' from a Google search; any general purpose articles on this sort of stuff would be appreciated as well. (In before rm -rf ProjectDir, do again in C# :D) EDIT: I haven't made it clear, but what I'm asking for are ways an advice of diagnosing these sort of memory problems. I know the std::string stuff is right, so it's something I've done (or a bug, but there's Not A Problem With Select). I'm sure I could check the code I've written up and you very smart folks would see the problem in no time, but I want to add this kind of code analysis to my 'toolbox', as it were.
These are relatively cheap mechanisms for possibly solving the problem: Keep an eye on my heap corruption question - I'm updating with the answers as they shake out. The first was balancing new[] and delete[], but you're already doing that. Give valgrind more of a go; it's an excellent tool, and I only wish it was available under Windows. It only slows your program down by about half, which is pretty good compared to the Windows equivalents. Think about using the Google Performance Tools as a replacement malloc/new. Have you cleaned out all your object files and started over? Perhaps your make file is... "suboptimal". You're not assert()ing enough in your code. How do I know that without having seen it? Like flossing, no-one assert()s enough in their code. Add in a validation function for your objects and call that on method start and method end. Are you compiling with -Wall? If not, do so. Find yourself a lint tool like PC-Lint. A small app like yours might fit in the PC-lint demo page, meaning no purchase for you! Check you're NULLing out pointers after deleting them. Nobody likes a dangling pointer. Same gig with declared but unallocated pointers. Stop using arrays. Use a vector instead. Don't use raw pointers. Use a smart pointer. Don't use auto_ptr! That thing is... surprising; its semantics are very odd. Instead, choose one of the Boost smart pointers, or something out of the Loki library.
7,664
7,669
Windows C++: How can I redirect stderr for calls to fprintf?
I am wrapping existing C++ code from a BSD project in our own custom wrapper and I want to integrate it to our code with as few changes as possible. This code uses fprintf to print to stderr in order to log / report errors. I want to redirect this to an alternative place within the same process. On Unix I have done this with a socketpair and a thread: one end of the socket is where I send stderr (via a call to dup2) and the other end is monitored in a thread, where I can then process the output. This does not work on Windows though because a socket is not the same as a file handle. All documents I have found on the web show how to redirect output from a child process, which is not what I want. How can I redirect stderr within the same process getting a callback of some sort when output is written? (and before you say so, I've tried SetStdHandle but cannot find any way to make this work)...
You can use a similar technique on Windows, you just need to use different words for the same concepts. :) This article: http://msdn.microsoft.com/en-us/library/ms682499.aspx uses a win32 pipe to handle I/O from another process, you just have to do the same thing with threads within the same process. Of course, in your case all output to stderr from anywhere in the process will be redirected to your consumer. Actually, other pieces of the puzzle you may need are _fdopen and _open_osfhandle. In fact, here's a related example from some code I released years ago: DWORD CALLBACK DoDebugThread(void *) { AllocConsole(); SetConsoleTitle("Copilot Debugger"); // The following is a really disgusting hack to make stdin and stdout attach // to the newly created console using the MSVC++ libraries. I hope other // operating systems don't need this kind of kludge.. :) stdout->_file = _open_osfhandle((long)GetStdHandle(STD_OUTPUT_HANDLE), _O_TEXT); stdin->_file = _open_osfhandle((long)GetStdHandle(STD_INPUT_HANDLE), _O_TEXT); debug(); stdout->_file = -1; stdin->_file = -1; FreeConsole(); CPU_run(); return 0; } In this case, the main process was a GUI process which doesn't start with stdio handles at all. It opens a console, then shoves the right handles into stdout and stdin so the debug() function (which was designed as a stdio interactive function) can interact with the newly created console. You should be able to open some pipes and do the same sort of thing to redirect stderr.
7,880
7,886
How do you open a file in C++?
I want to open a file for reading, the C++ way. I need to be able to do it for: text files, which would involve some sort of read line function. binary files, which would provide a way to read raw data into a char* buffer.
There are three ways to do this, depending on your needs. You could use the old-school C way and call fopen/fread/fclose, or you could use the C++ fstream facilities (ifstream/ofstream), or if you're using MFC, use the CFile class, which provides functions to accomplish actual file operations. All of these are suitable for both text and binary; for line-by-line text reading, the free std::getline function works with an ifstream. For block reads you'd use the fstream classes (<fstream>) with the stream operators (<< and >>) or the read function to read/write blocks of data: int nsize = 10; std::vector<char> somedata(nsize); std::ifstream myfile; myfile.open("<path to file>"); myfile.read(&somedata[0], nsize); myfile.close(); Note that, if you're using Visual Studio 2005 or higher, traditional fstream may not be available (there's a new Microsoft implementation, which is slightly different, but accomplishes the same thing).
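For the two concrete cases in the question, a minimal sketch (file names are made up):

    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <string>
    #include <vector>

    int main()
    {
        // Text: read line by line.
        std::ifstream text("example.txt");
        std::string line;
        while (std::getline(text, line))
            std::cout << line << '\n';

        // Binary: slurp the raw bytes into a buffer.
        std::ifstream bin("example.bin", std::ios::binary);
        std::vector<char> bytes((std::istreambuf_iterator<char>(bin)),
                                 std::istreambuf_iterator<char>());
        std::cout << "read " << bytes.size() << " bytes\n";
        return 0;
    }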
8,263
8,347
I can't get my debugger to stop breaking on first-chance exceptions
I'm using Visual C++ 2003 to debug a program remotely via TCP/IP. I had set the Win32 exception c00000005, "Access violation," to break into the debugger when thrown. Then, I set it back to "Use parent setting." The setting for the parent, Win32 Exceptions, is to continue when the exception is thrown. Now, when I debug the program, it breaks each time that exception is thrown, forcing me to click Continue to let it keep debugging. How do I get it to stop breaking like this?
I'd like to support Will Dean's answer An access violation sounds like an actual bug in your code. It's not something I'd expect the underlying C/++ Runtime to be throwing and catching internally. The 'first-chance-exceptions' feature is so you can intercept things which get 'caught' in code, using the debugger, and have a look. If there's nothing 'catching' that exception (which makes sense, why on earth would you catch and ignore access violations?), then it will trigger the debugger regardless of what options you may have set.
8,451
8,486
Secure Memory Allocator in C++
I want to create an allocator which provides memory with the following attributes: cannot be paged to disk. is incredibly hard to access through an attached debugger The idea is that this will contain sensitive information (like licence information) which should be inaccessible to the user. I have done the usual research online and asked a few other people about this, but I cannot find a good place start on this problem. Updates Josh mentions using VirtualAlloc to set protection on the memory space. I have created a custom allocator ( shown below ) I have found the using the VirtualLock function it limits the amount of memory I can allocate. This seems to be by design though. Since I am using it for small objects this is not a problem. // template<class _Ty> class LockedVirtualMemAllocator : public std::allocator<_Ty> { public: template<class _Other> LockedVirtualMemAllocator<_Ty>& operator=(const LockedVirtualMemAllocator<_Other>&) { // assign from a related LockedVirtualMemAllocator (do nothing) return (*this); } template<class Other> struct rebind { typedef LockedVirtualMemAllocator<Other> other; }; pointer allocate( size_type _n ) { SIZE_T allocLen = (_n * sizeof(_Ty)); DWORD allocType = MEM_COMMIT; DWORD allocProtect = PAGE_READWRITE; LPVOID pMem = ::VirtualAlloc( NULL, allocLen, allocType, allocProtect ); if ( pMem != NULL ) { ::VirtualLock( pMem, allocLen ); } return reinterpret_cast<pointer>( pMem ); } pointer allocate( size_type _n, const void* ) { return allocate( _n ); } void deallocate(void* _pPtr, size_type _n ) { if ( _pPtr != NULL ) { SIZE_T allocLen = (_n * sizeof(_Ty)); ::SecureZeroMemory( _pPtr, allocLen ); ::VirtualUnlock( _pPtr, allocLen ); ::VirtualFree( _pPtr, 0, MEM_RELEASE ); } } }; and is used //a memory safe std::string typedef std::basic_string<char, std::char_traits<char>, LockedVirtualMemAllocato<char> > modulestring_t; Ted Percival mentions mlock, but I have no implementation of that yet. I found Practical Cryptography by Neil Furguson and Bruce Schneier quite helpful as well.
You can't really protect against memory access. You can probably prevent paging if you are running as an admin or as the system, but you cannot prevent the admin or system from reading your memory. Even if you could somehow completely block other processes from reading your memory (which you can't), another process could still actually inject a new thread into your process and read the memory that way. Even if you could somehow completely lock down your process and guarantee that the OS would never allow anyone else to access your process, you still don't have full protection. The entire OS could be running in a virtual machine, which could be paused and inspected at any time. You cannot protect memory contents from the owner of the system. Hollywood and the music industry have been aching for this for years. If it were possible, they'd already be doing it.
8,896
2,356,710
Can I get Memcached running on a Windows (x64) 64bit environment?
Does anyone know IF, WHEN or HOW I can get Memcached running on a Windows 64bit environment? I'm setting up a new hosting solution and would much prefer to run a 64bit OS, and since it's an ASP.Net MVC solution with SQL Server DB, the OS is either going to be Windows Server 2003 or (hopefully!) 2008. I know that this could spill over into a debate regarding 32bit vs 64bit on servers, but let's just say that my preference is 64bit and that I have some very good reasons. So far, I've tried a number of options and found a bit of help related to getting this up on a 32bit machine (and succeeded I might add), but since the original Windows port is Win32 specific, this is hardly going to help when installing as a service on x64. It also has a dependency on the libevent for which I can only get a Win32 compiled version. I suspect that simply loading all this up in C++ and hitting "compile" (for 64bit) wouldn't work, not least because of the intricate differences in 32 and 64bit architectures, but I'm wondering if anyone is working on getting this off the ground? Unfortunately, my expertise lie in managed code (C#) only, otherwise I would try and take this on myself, but I can't believe I'm the only guy out there trying to get memcached running on a 64 bit Windows server....am I? Update Yes I'm afraid I'm still looking for an answer to this - all my efforts (with my pathetic C++ skills) to make a stable build have failed - I've trashed one server and 3 VM's just trying it out so now I turn to the real experts. Is anyone planning on porting this to 64bit? Or are you really suggesting that I use MS Velocity instead? I shudder at the thought. Update: @Lars - I do use Enyim actually - it's very good, but what you're referring to is a client, rather than the server part. @DannySmurf - I've only been able to install it as a service on a 32 bit OS. 64 bit OS rejects the installation of this Win32 service. Of course yes, lots of Win32 code works seamlessly on x64 architecture, hence you can run 32bit apps (like Office for instance) or games on Vista/XP 64 etc, but this doesn't translate directly when it comes to services. I'm no expert, I suspect that it has to do with the syncs or eventing that services need to subscribe to, and I suspect that 64 and 32 don't play nicely. I'm happy to be corrected on any of this, but to answer your question - yes I have tried. @OJ - thanks very much for the straight-forward response. I thought as much, but wasn't sure if anyone else had suggestions or had already gone down this route. Maybe when StackOverflow is LIVE, then more people will respond and let me know if this is something being looked into, and although I can try and compile it myself - I simply can't "trust" (with my C++ experience level) that it would provide "Enterprise Level" reliability in such a crucial component of large scalable solutions. I think it would need educated intervention rather than my unsanitised experimental approach before I could be confident. One little oversight on my part, could bring the site down. Oh well... till next time.
North Scale labs have released a build of memcached 1.4.4 for Windows x64: http://blog.couchbase.com/memcached-windows-64-bit-pre-release-available http://labs.northscale.com/memcached-packages/ UPDATE: they have recently released Memcached Server - still FREE but enhanced distro with clustering, web-based admin/stats UI etc. (I'm not related to them in any way) Check it out at http://northscale.com/products/memcached.html and download at: http://www.northscale.com/download.php?a=d UPDATE 2: NorthScale Memcached is no longer available as a standalone download. Now they have made it part of their commercial No-SQL DB offering called Membase. It can be configured to run in Memcached-only mode (i.e. without persistence) and there's a 100% free version too. Check it out here: http://www.membase.org/downloads UPDATE 3: MemBase has slept with CouchDB and produced a hybrid product offering, called CouchBase. They still do offer a free "Community" version at http://www.couchbase.com/download
9,321
9,328
How do you create a static class in C++?
How do you create a static class in C++? I should be able to do something like: cout << "bit 5 is " << BitParser::getBitAt(buffer, 5) << endl; Assuming I created the BitParser class. What would the BitParser class definition look like?
If you're looking for a way of applying the "static" keyword to a class, like you can in C# for example, then you won't be able to without using Managed C++. But by the looks of your sample, you just need to create a public static method on your BitParser object. Like so: BitParser.h class BitParser { public: static bool getBitAt(int buffer, int bitIndex); // ...lots of great stuff private: // Disallow creating an instance of this object BitParser() {} }; BitParser.cpp bool BitParser::getBitAt(int buffer, int bitIndex) { bool isBitSet = false; // .. determine if bit is set return isBitSet; } You can use this code to call the method in the same way as your example code.
9,589
12,064
Is there a good tool for Makefile generation?
I'm looking for a tool which can generate a Makefile for a C/C++ project for different compilers (GCC, Microsoft Visual C++, C++Builder, etc.) and different platforms (Windows, Linux, and Mac).
Other suggestions you may want to consider: Scons is a cross-platform, cross-compiler build library, uses Python scripting for the build systems. Used in a variety of large projects, and performs very well. If you're using Qt, QMake is a nice build system too. CMake is also pretty sweet. Finally, if all else fails...
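As a taste of CMake, a small project needs only a few lines of CMakeLists.txt (the file names below are made up); the same file drives Makefile generation on Linux/Mac and Visual Studio project generation on Windows:

    # CMakeLists.txt
    cmake_minimum_required(VERSION 2.6)
    project(MyApp CXX)

    add_executable(myapp main.cpp util.cpp)

Typical usage is "cmake ." followed by "make" on Unix, or something like cmake -G "Visual Studio 9 2008" . on Windows to get a solution file.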
9,667
9,670
Validating a Win32 Window Handle
Given a handle of type HWND is it possible to confirm that the handle represents a real window?
There is a function IsWindow which does exactly what you asked for. BOOL isRealHandle = IsWindow(unknownHandle); Look at this link for more information.
10,059
10,555
Removing icon from Windows title bars without ditching close button or system menu?
I'm developing an MFC application and I've recently been searching for a good method to remove the icon from a Windows title bar, but retain the close button. The two obvious candidate solutions are turning off the system menu style or using the tool window style, but I would prefer not to disable the system menu or use the shrunken tool window title bar. Many MFC applications have this functionality, so I wonder: am I missing some standard way of doing this?
You can use WM_NCRBUTTONDOWN to detect if the user has right-clicked on your caption and then bring up the system menu.
10,880
10,890
Any good advice on using emacs for C++ project?
I'm looking for a good article on using emacs as C/C++ IDE. Something like Steve Yegge's "Effective emacs".
No specific article, really, but I've found EmacsWiki to be full of useful information. Consider checking out these entries: CPlusPlus as a starting point for many C++-related articles, and CppTemplate to define a template that can give you a good skeleton when you start new files
10,901
11,004
Future proofing a large UI Application - MFC with 2008 Feature pack, or C# and Winforms?
My company has developed a long standing product using MFC in Visual C++ as the defacto standard for UI development. Our codebase contains ALOT of legacy/archaic code which must be kept operational. Some of this code is older than me (originally written in the late 70s) and some members of our team are still on Visual Studio 6. However, a conclusion has thankfully been reached internally that our product is looking somewhat antiquated compared to our competitors', and that something needs to be done. I am currently working on a new area of the UI which is quite separate from the rest of the product. I have therefore been given the chance to try out 'new' technology stacks as a sort of proving ground before the long process of moving over the rest of the UI begins. I have been using C# with Windows Forms and the .net framework for a while in my spare time and enjoy it, but am somewhat worried about the headaches caused by interop. While this particular branch of the UI won't require much interop with the legacy C++ codebase, I can forsee this becoming an issue in the future. The alternative is just to continue with MFC, but try and take advantage of the new feature pack that shipped with VS2008. This I guess is the easiest option, but I worry about longevity and not taking advantage of the goodness that is .net... So, which do I pick? We're a small team so my recommendation will quite probably be accepted as a future direction for our development - I want to get it right. Is MFC dead? Is C#/Winforms the way forward? Is there anything else I'm totally missing? Help greatly appreciated!
I'm a developer on an app that has a ton of legacy MFC code, and we have all of your same concerns. A big driver for our strategy was to eliminate as much risk and uncertainty as we could, which meant avoiding The Big Rewrite. As we all know, TBR fails most of the time. So we chose an incremental approach that allows us to preserve modules that won't be changing in the current release, writing new features managed, andporting features that are getting enhancements to managed. You can do this several ways: Host WPF content on your MFC views (see here) For MFC MDI apps, create a new WinForms framework and host your MFC MDI views (see here) Host WinForms user controls in MFC Dialogs and Views (see here) The problem with adopting WPF (option 1) is that it will require you to rewrite all of your UI at once, otherwise it'll look pretty schizophrenic. The second approach looks viable but very complicated. The third approach is the one we selected and it's been working very well. It allows you to selectively refresh areas of your app while maintaining overall consistency and not touching things that aren't broken. The Visual C++ 2008 Feature Pack looks interesting, I haven't played with it though. Seems like it might help with your issue of outdated look. If the "ribbon" would be too jarring for your users you could look at third-party MFC and/or WinForms control vendors. My overall recommendation is that interop + incremental change is definitely preferable to sweeping changes. After reading your follow-up, I can definitely confirm that the productivity gains of the framework vastly outweigh the investment in learning it. Nobody on our team had used C# at the start of this effort and now we all prefer it.
10,985
11,299
How to prevent an object being created on the heap?
Does anyone know how I can, in platform-independent C++ code prevent an object from being created on the heap? That is, for a class "Foo", I want to prevent users from doing this: Foo *ptr = new Foo; and only allow them to do this: Foo myfooObject; Does anyone have any ideas? Cheers,
Nick's answer is a good starting point, but incomplete, as you actually need to overload: private: void* operator new(size_t); // standard new void* operator new(size_t, void*); // placement new void* operator new[](size_t); // array new void* operator new[](size_t, void*); // placement array new (Good coding practice would suggest you should also overload the delete and delete[] operators -- I would, but since they're not going to get called it isn't really necessary.) Pauldoo is also correct that this doesn't survive aggregating on Foo, although it does survive inheriting from Foo. You could do some template meta-programming magic to HELP prevent this, but it would not be immune to "evil users" and thus is probably not worth the complication. Documentation of how it should be used, and code review to ensure it is used properly, are the only ~100% way.
11,060
11,074
How should I unit test a code-generator?
This is a difficult and open-ended question I know, but I thought I'd throw it to the floor and see if anyone had any interesting suggestions. I have developed a code-generator that takes our python interface to our C++ code (generated via SWIG) and generates code needed to expose this as WebServices. When I developed this code I did it using TDD, but I've found my tests to be brittle as hell. Because each test essentially wanted to verify that for a given bit of input code (which happens to be a C++ header) I'd get a given bit of outputted code I wrote a small engine that reads test definitions from XML input files and generates test cases from these expectations. The problem is I dread going in to modify the code at all. That and the fact that the unit tests themselves are a: complex, and b: brittle. So I'm trying to think of alternative approaches to this problem, and it strikes me I'm perhaps tackling it the wrong way. Maybe I need to focus more on the outcome, IE: does the code I generate actually run and do what I want it to, rather than, does the code look the way I want it to. Has anyone got any experiences of something similar to this they would care to share?
I started writing up a summary of my experience with my own code generator, then went back and re-read your question and found you had already touched upon the same issues yourself, focus on the execution results instead of the code layout/look. Problem is, this is hard to test, the generated code might not be suited to actually run in the environment of the unit test system, and how do you encode the expected results? I've found that you need to break down the code generator into smaller pieces and unit test those. Unit testing a full code generator is more like integration testing than unit testing if you ask me.
11,127
11,132
In C++/Windows how do I get the network name of the computer I'm on?
In a C++ Windows (XP and NT, if it makes a difference) application I'm working on, I need to get the network name associated with the computer the code is executing on, so that I can convert local filenames from C:\filename.ext to \\network_name\C$\filename.ext. How would I do this? Alternatively, if there's a function that will just do the conversion I described, that would be even better. I looked into WNetGetUniversalName, but that doesn't seem to work with local (C drive) files.
You'll want Win32's GetComputerName: http://msdn.microsoft.com/en-us/library/ms724295(VS.85).aspx
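A minimal sketch of both steps, getting the name and building the administrative-share path (this assumes the C$ admin share exists and uses the ANSI variant of the API for brevity):

    #include <windows.h>
    #include <iostream>
    #include <string>

    int main()
    {
        char  name[MAX_COMPUTERNAME_LENGTH + 1];
        DWORD size = sizeof(name);
        if (!GetComputerNameA(name, &size)) {
            std::cerr << "GetComputerName failed: " << GetLastError() << '\n';
            return 1;
        }

        std::string local = "C:\\filename.ext";
        // C:\filename.ext -> \\MACHINE\C$\filename.ext
        std::string unc = std::string("\\\\") + name + "\\" +
                          local.substr(0, 1) + "$" + local.substr(2);
        std::cout << unc << '\n';
        return 0;
    }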
11,405
11,414
GCC issue: using a member of a base class that depends on a template argument
The following code doesn't compile with gcc, but does with Visual Studio: template <typename T> class A { public: T foo; }; template <typename T> class B: public A <T> { public: void bar() { cout << foo << endl; } }; I get the error: test.cpp: In member function ‘void B::bar()’: test.cpp:11: error: ‘foo’ was not declared in this scope But it should be! If I change bar to void bar() { cout << this->foo << endl; } then it does compile, but I don't think I have to do this. Is there something in the official specs of C++ that GCC is following here, or is it just a quirk?
This changed in gcc-3.4. The C++ parser got much more strict in that release -- per the spec but still kinda annoying for people with legacy or multi-platform code bases.
11,491
11,502
String To Lower/Upper in C++
What is the best way people have found to do String to Lower case / Upper case in C++? The issue is complicated by the fact that C++ isn't an English only programming language. Is there a good multilingual method?
#include <algorithm> std::string data = "Abc"; std::transform(data.begin(), data.end(), data.begin(), ::toupper); http://notfaq.wordpress.com/2007/08/04/cc-convert-string-to-upperlower-case/ Also, CodeProject article for common string methods: http://www.codeproject.com/KB/stl/STL_string_util.aspx
11,562
2,684,544
How to overload std::swap()
std::swap() is used by many std containers (such as std::list and std::vector) during sorting and even assignment. But the std implementation of swap() is very generalized and rather inefficient for custom types. Thus efficiency can be gained by overloading std::swap() with a custom type specific implementation. But how can you implement it so it will be used by the std containers?
The right way to overload std::swap's implementation (aka specializing it) is to write it in the same namespace as what you're swapping, so that it can be found via argument-dependent lookup (ADL). One particularly easy thing to do is: class X { // ... friend void swap(X& a, X& b) { using std::swap; // bring in swap for built-in types swap(a.base1, b.base1); swap(a.base2, b.base2); // ... swap(a.member1, b.member1); swap(a.member2, b.member2); // ... } };
11,635
315,463
Case-insensitive string comparison in C++
What is the best way of doing case-insensitive string comparison in C++ without transforming a string to all uppercase or all lowercase? Please indicate whether the methods are Unicode-friendly and how portable they are.
Boost includes a handy algorithm for this: #include <boost/algorithm/string.hpp> // Or, for fewer header dependencies: //#include <boost/algorithm/string/predicate.hpp> std::string str1 = "hello, world!"; std::string str2 = "HELLO, WORLD!"; if (boost::iequals(str1, str2)) { // Strings are identical }
11,854
11,935
Inheritance and Polymorphism - Ease of use vs Purity
In a project our team is using object lists to perform mass operations on sets of data that should all be processed in a similar way. In particular, different objects would ideally act the same, which would be very easily achieved with polymorphism. The problem I have with it is that inheritance implies the is a relationship, rather than the has a relationship. For example, several objects have a damage counter, but to make this easy to use in an object list, polymorphism could be used - except that would imply an is a relationship which wouldn't be true. (A person is not a damage counter.) The only solution I can think of is to have a member of the class return the proper object type when implicitly casted instead of relying on inheritance. Would it be better to forgo the is a / has a ideal in exchange for ease of programming? Edit: To be more specific, I am using C++, so using polymorphism would allow the different objects to "act the same" in the sense that the derived classes could reside within a single list and be operated upon by a virtual function of the base class. The use of an interface (or imitating them via inheritance) seems like a solution I would be willing to use.
This can be accomplished using multiple inheritance. In your specific case (C++), you can use pure virtual classes as interfaces. This allows you to have multiple inheritance without creating scope/ambiguity problems. Example: class Damage { virtual void addDamage(int d) = 0; virtual int getDamage() = 0; }; class Person : public virtual Damage { void addDamage(int d) { // ... damage += d * 2; } int getDamage() { return damage; } }; class Car : public virtual Damage { void addDamage(int d) { // ... damage += d; } int getDamage() { return damage; } }; Now both Person and Car 'is-a' Damage, meaning, they implement the Damage interface. The use of pure virtual classes (so that they are like interfaces) is key and should be used frequently. It insulates future changes from altering the entire system. Read up on the Open-Closed Principle for more information.
12,319
265,407
_wfopen equivalent under Mac OS X
I'm looking to the equivalent of Windows _wfopen() under Mac OS X. Any idea? I need this in order to port a Windows library that uses wchar* for its File interface. As this is intended to be a cross-platform library, I am unable to rely on how the client application will get the file path and give it to the library.
The POSIX APIs on Mac OS X accept UTF-8 strings. In order to convert a wchar_t string to UTF-8, it is possible to use the CoreFoundation framework from Mac OS X. Here is a class that will wrap a UTF-8 string generated from a wchar_t string. class Utf8 { public: Utf8(const wchar_t* wsz): m_utf8(NULL) { // OS X uses 32-bit wchar const int bytes = wcslen(wsz) * sizeof(wchar_t); // comp_bLittleEndian is in the lib I use in order to detect PowerPC/Intel CFStringEncoding encoding = comp_bLittleEndian ? kCFStringEncodingUTF32LE : kCFStringEncodingUTF32BE; CFStringRef str = CFStringCreateWithBytesNoCopy(NULL, (const UInt8*)wsz, bytes, encoding, false, kCFAllocatorNull ); const int bytesUtf8 = CFStringGetMaximumSizeOfFileSystemRepresentation(str); m_utf8 = new char[bytesUtf8]; CFStringGetFileSystemRepresentation(str, m_utf8, bytesUtf8); CFRelease(str); } ~Utf8() { if( m_utf8 ) { delete[] m_utf8; } } public: operator const char*() const { return m_utf8; } private: char* m_utf8; }; Usage: const wchar_t* wsz = L"Here is some Unicode content: éà€œæ"; const Utf8 utf8 = wsz; FILE* file = fopen(utf8, "r"); This will work for reading or writing files.
12,332
12,408
In a C/C++ program, how does the system (Windows, Linux, and Mac OS X) call the main() function?
I am looking for a more technical explanation than "the OS calls the function". Is there a website or book that covers this?
The .exe file (or equivalent on other platforms) contains an 'entry point' address. To a first approximation, the OS loads the relevant sections of the .EXE file into RAM, and then jumps to the entry point. As others have said, this entry point will not be 'main', but will instead be a part of the runtime library - it will do things like initialising static objects, setting up the argc and argv parameters, setting up standard input, standard output, standard error, etc. When it's done all that, it will call your main() function. When main exits, the runtime goes through an analogous process of passing your return code back to the environment, calling static destructors, calling _atexit routines, etc. If you have Microsoft tools (perhaps not the freebie ones), then you have all the runtime source, and an easy way to look at it is to put a breakpoint on the closing brace of your main() method, and single step back up into the runtime.
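Purely as an illustration (this is not any vendor's actual runtime source, and every name in it is made up), the startup stub that the entry-point address refers to behaves roughly like this:

extern int main(int argc, char* argv[]);

// Hypothetical entry point; real ones are e.g. mainCRTStartup on Windows or
// _start on Linux. User code cannot legally call main itself - this is the
// one place where the runtime does it.
extern "C" void startup_stub()
{
    // 1. initialise the runtime: heap, stdio, locale, static constructors
    // 2. build argc/argv from whatever the OS handed over
    int argc = 0;
    char** argv = 0;            // would be filled in from OS-supplied data

    int code = main(argc, argv);

    // 3. flush streams, run atexit handlers and static destructors
    // 4. hand 'code' back to the OS (exit / ExitProcess / _exit)
    (void)code;
}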
12,594
12,599
Windows/C++: How do I determine the share name associated with a shared drive?
Let's say I have a drive such as C:\, and I want to find out if it's shared and what it's share name (e.g. C$) is. To find out if it's shared, I can use NetShareCheck. How do I then map the drive to its share name? I thought that NetShareGetInfo would do it, but it looks like that takes the share name, not the local drive name, as an input.
If all else fails, you could always use NetShareEnum and call NetShareGetInfo on each.
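A rough sketch of that approach, using share information level 2 (which exposes the local path behind each share but typically needs administrative rights; error handling trimmed):

#include <windows.h>
#include <lm.h>
#include <wchar.h>
#pragma comment(lib, "netapi32.lib")

int main()
{
    PSHARE_INFO_2 buf = NULL;
    DWORD entriesRead = 0, totalEntries = 0, resume = 0;
    NET_API_STATUS rc = NetShareEnum(NULL, 2, (LPBYTE*)&buf,
                                     MAX_PREFERRED_LENGTH,
                                     &entriesRead, &totalEntries, &resume);
    if (rc == NERR_Success || rc == ERROR_MORE_DATA)
    {
        for (DWORD i = 0; i < entriesRead; ++i)
        {
            // shi2_path holds the local path backing the share (e.g. C:\)
            if (_wcsicmp(buf[i].shi2_path, L"C:\\") == 0)
                wprintf(L"C:\\ is shared as %s\n", buf[i].shi2_netname);
        }
        NetApiBufferFree(buf);
    }
    return 0;
}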
12,633
12,677
What is the easiest way to parse an INI File in C++?
I'm trying to parse an INI file using C++. Any tips on what is the best way to achieve this? Should I use the Windows API tools for INI file processing (with which I am totally unfamiliar), an open-source solution or attempt to parse it manually?
You can use the Windows API functions, such as GetPrivateProfileString() and GetPrivateProfileInt().
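For example (the section, key and file names here are purely illustrative; note that the file name should normally be a full path, otherwise Windows looks for the INI in the Windows directory):

#include <windows.h>
#include <iostream>

int main()
{
    // Expected INI contents:
    // [Database]
    // Server=localhost
    // Port=5432
    wchar_t server[256] = L"";
    GetPrivateProfileStringW(L"Database", L"Server", L"localhost",
                             server, 256, L"C:\\config\\settings.ini");
    int port = GetPrivateProfileIntW(L"Database", L"Port", 5432,
                                     L"C:\\config\\settings.ini");

    std::wcout << server << L":" << port << std::endl;
    return 0;
}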
12,716
12,751
Problems with #import of .NET out-of-proc server
In C++ program, I am trying to #import TLB of .NET out-of-proc server. I get errors like: z:\server.tlh(111) : error C2146: syntax error : missing ';' before identifier 'GetType' z:\server.tlh(111) : error C2501: '_TypePtr' : missing storage-class or type specifiers z:\server.tli(74) : error C2143: syntax error : missing ';' before 'tag::id' z:\server.tli(74) : error C2433: '_TypePtr' : 'inline' not permitted on data declarations z:\server.tli(74) : error C2501: '_TypePtr' : missing storage-class or type specifiers z:\server.tli(74) : fatal error C1004: unexpected end of file found The TLH looks like: _bstr_t GetToString(); VARIANT_BOOL Equals (const _variant_t & obj); long GetHashCode(); _TypePtr GetType(); long Open(); I am not really interested in the having the base object .NET object methods like GetType(), Equals(), etc. But GetType() seems to be causing problems. Some google research indicates I could #import mscorlib.tlb (or put it in path), but I can't get that to compile either. Any tips?
Adding no_namespace and raw_interfaces_only to my #import fixed it: #import "server.tlb" no_namespace raw_interfaces_only named_guids Also, using TLBEXP.EXE instead of REGASM.EXE seems to help with this issue.
13,128
13,140
How can I combine several C/C++ libraries into one?
I'm tired of adding ten link libraries into my project, or requiring eight of them to use my own. I'd like to take existing libraries like libpng.a, libz.a, libjpeg.a, and combine them into one single .a library. Is that possible? How about combining .lib libraries?
On Unix like systems, the ld and ar utilities can do this. Check out http://en.wikipedia.org/wiki/Ar_(Unix) or lookup the man pages on any Linux box or through Google, e.g., 'Unix man ar'. Please note that you might be better off linking to a shared (dynamic) library. This would add a dependency to your executable, but it will dramatically reduce its size, especially if you're writing a graphic application.
13,473
13,481
How does one rank an array (sort) by value? *With a twist*
I would like to sort an array in ascending order using C/C++. The outcome is an array containing element indexes. Each index corresponds to the element's location in the sorted array. Example Input: 1, 3, 4, 9, 6 Output: 1, 2, 3, 5, 4 Edit: I am using a shell sort procedure. The duplicate value indexes are arbitrarily chosen based on which duplicate values are first in the original array. Update: Despite my best efforts, I haven't been able to implement a sorting algorithm for an array of pointers. The current example won't compile. Could someone please tell me what's wrong? I'd very much appreciate some help! void SortArray(int ** pArray, int ArrayLength) { int i, j, flag = 1; // set flag to 1 to begin initial pass int * temp; // holding variable orig with no * for (i = 1; (i <= ArrayLength) && flag; i++) { flag = 0; for (j = 0; j < (ArrayLength - 1); j++) { if (*pArray[j + 1] > *pArray[j]) // ascending order simply changes to < { &temp = &pArray[j]; // swap elements &pArray[j] = &pArray[j + 1]; //the problem lies somewhere in here &pArray[j + 1] = &temp; flag = 1; // indicates that a swap occurred. } } } };
Since you're using C++, I would do it something like this. The SortIntPointers function can be any sort algorithm, the important part is that it sorts the array of pointers based on the int that they are pointing to. Once that is done, you can go through the array of pointers and assign their sorted index which will end up in the original position in the original array. int* intArray; // set somewhere else int arrayLen; // set somewhere else int** pintArray = new int*[arrayLen]; for(int i = 0; i < arrayLen; ++i) { pintArray[i] = &intArray[i]; } // This function sorts the pointers according to the values they // point to. In effect, it sorts intArray without losing the positional // information. SortIntPointers(pintArray, arrayLen); // Dereference the pointers and assign their sorted position. for(int i = 0; i < arrayLen; ++i) { *pintArray[i] = i; } Hopefully that's clear enough.
13,607
13,618
GUI system development resources?
Could someone recommend any good resources for creating graphical user interfaces, preferably in C/C++? Currently my biggest influence is 3DBuzz.com's C++/OpenGL VTMs (Video Training Modules). While they are very good, they cover a large area of graphics programming, so only skim the surface of GUI programming. This question does relate to "How do I make a GUI?", where there is also a rough outline of my current structure. Any response would be appreciated. Edit: I've just read some of the answers and spotted that I've missed some points. I had to type the question twice as I caught the wrong button and lost it. I missed two important points, first: this will be used cross-platform, including homebrew on a Sony PSP. Second: I want to create a GUI system, not use an existing one. Edit 2: I think some of you are missing the point: I don't want to use an existing GUI system; I want to build one. Qt in its current form is not portable to the PSP, never mind the overkill of such a task. That said, I've decided to create an IM-GUI and have started to prototype the code.
I wouldn't use OpenGL for the GUI unless you are planning for hardware-accelerated and/or 3D effects that you don't think you would get with a more traditional GUI toolkit (e.g. Qt, wxWidgets, GTK, etc.). If you just need a quick simple GUI for hosting your OpenGL graphics then FLTK is a nice choice. Otherwise, for rendering the GUI directly in OpenGL there are libraries like Crazy Eddie's GUI that do just that and provide lots of skinnable widgets that you won't have to reinvent. The window and OpenGL context could then be provided by a portable library like SDL. EDIT: Now that I've gone back and taken a look at your other post I think I have a better understanding of what you are asking. For a GUI on an embedded system like the Nintendo DS, I would consider using an "immediate mode" GUI. Jari Komppa has a good tutorial about them, but you could use a more object-oriented approach with C++ than the C code he presents.
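To make the "immediate mode" idea concrete, here is a very small C++ sketch of an IM-GUI button. All of the names (UIState, doButton, drawRect) are made up for illustration; drawing and input gathering would come from whatever layer you use (OpenGL/SDL on the PSP):

struct UIState
{
    int  mouseX, mouseY;
    bool mouseDown;
    int  hotItem, activeItem;
    UIState() : mouseX(0), mouseY(0), mouseDown(false), hotItem(0), activeItem(0) {}
};

bool regionHit(const UIState& ui, int x, int y, int w, int h)
{
    return ui.mouseX >= x && ui.mouseX < x + w &&
           ui.mouseY >= y && ui.mouseY < y + h;
}

// Called every frame; returns true on the frame the button was clicked.
bool doButton(UIState& ui, int id, int x, int y, int w, int h)
{
    if (regionHit(ui, x, y, w, h))
    {
        ui.hotItem = id;
        if (ui.activeItem == 0 && ui.mouseDown)
            ui.activeItem = id;
    }

    bool clicked = (!ui.mouseDown && ui.hotItem == id && ui.activeItem == id);

    // drawRect(x, y, w, h, ui.hotItem == id ? HOT_COLOUR : NORMAL_COLOUR);
    return clicked;
}

Because the widget is re-declared every frame, there is no retained widget tree to keep in sync with your game state, which is the main appeal on small targets like the PSP.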
13,806
13,870
Any Windows APIs to get file handles besides createfile and openfile?
I am trying to snoop on a log file that an application is writing to. I have successfully hooked createfile with the detours library from MSR, but createfile never seems to be called with file I am interested in snooping on. I have also tried hooking openfile with the same results. I am not an experienced Windows/C++ programmer, so my initial two thoughts were either that the application calls createfile before I hook the apis, or that there is some other API for creating files/obtaining handles for them.
You can use Sysinternals' FileMon. It is an excellent monitor that can tell you exactly which file-related system calls are being made and what the parameters are. I think this approach is much easier than hooking API calls and much less intrusive.
14,126
14,127
How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order?
How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order?
// Boost DFS example on an undirected graph. // Create a sample graph, traverse its nodes // in DFS order and print out their values. #include <boost/graph/adjacency_list.hpp> #include <boost/graph/depth_first_search.hpp> #include <iostream> using namespace std; typedef boost::adjacency_list<boost::listS, boost::vecS, boost::undirectedS> MyGraph; typedef boost::graph_traits<MyGraph>::vertex_descriptor MyVertex; class MyVisitor : public boost::default_dfs_visitor { public: void discover_vertex(MyVertex v, const MyGraph& g) const { cerr << v << endl; return; } }; int main() { MyGraph g; boost::add_edge(0, 1, g); boost::add_edge(0, 2, g); boost::add_edge(1, 2, g); boost::add_edge(1, 3, g); MyVisitor vis; boost::depth_first_search(g, boost::visitor(vis)); return 0; }
14,165
14,169
Strange C++ errors with code that has min()/max() calls
I'm seeing strange errors when my C++ code has min() or max() calls. I'm using Visual C++ compilers.
Check if your code is including the windows.h header file and either your code or other third-party headers have their own min()/max() definitions. If yes, then prepend your windows.h inclusion with a definition of NOMINMAX like this: #define NOMINMAX #include <windows.h>
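A small sanity check showing both remedies; the extra parentheses in (std::max) stop function-like macro expansion even without NOMINMAX, which helps in headers you don't control:

#define NOMINMAX
#include <windows.h>
#include <algorithm>

int main()
{
    int a = 1, b = 2;
    int lo = std::min(a, b);    // works once NOMINMAX is defined
    int hi = (std::max)(a, b);  // parentheses defeat the macro regardless
    return hi - lo;             // 1
}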
14,271
14,272
Boost warnings with VC++ 9
When the Boost libraries/headers are used with the VC++ 9 compiler (Visual C++ 2008 Express Edition or Visual Studio 2008), a lot of benign warnings are generated. They are of 2 kinds: Warning about the Wp64 setting. Warning about the compiler version. How can I turn off these warnings?
Warning about the Wp64 setting. Turn off the /Wp64 setting which is set by default. You can find it in Project Properties -> C/C++ -> General. Warning about the compiler version. Go to the Boost trunk (online) and get the latest boost\boost\config\compiler\visualc.hpp header file. Diff it with the current file and merge the sections that reference _MSC_VER 1500. (1500 is the VC9 compiler version number used in the Boost configuration.)
14,491
8,768,957
How can an MFC application terminate itself?
What is the proper way for an MFC application to cleanly close itself?
Programmatically Terminate an MFC Application void ExitMFCApp() { // same as double-clicking on main window close box ASSERT(AfxGetMainWnd() != NULL); AfxGetMainWnd()->SendMessage(WM_CLOSE); } http://support.microsoft.com/kb/117320
15,240
15,269
How do you create a debug only function that takes a variable argument list? Like printf()
I'd like to make a debug logging function with the same parameters as printf. But one that can be removed by the pre-processor during optimized builds. For example: Debug_Print("Warning: value %d > 3!\n", value); I've looked at variadic macros but those aren't available on all platforms. gcc supports them, msvc does not.
I still do it the old way, by defining a macro (XTRACE, below) which correlates to either a no-op or a function call with a variable argument list. Internally, call vsnprintf so you can keep the printf syntax: #include <windows.h> #include <tchar.h> #include <stdarg.h> #include <stdio.h> void XTrace0(LPCTSTR lpszText) { ::OutputDebugString(lpszText); } void XTrace(LPCTSTR lpszFormat, ...) { va_list args; va_start(args, lpszFormat); int nBuf; TCHAR szBuffer[512]; // get rid of this hard-coded buffer nBuf = _vsnprintf(szBuffer, 511, lpszFormat, args); ::OutputDebugString(szBuffer); va_end(args); } Then a typical #ifdef switch: #ifdef _DEBUG #define XTRACE XTrace #else #define XTRACE #endif Well, that can be cleaned up quite a bit, but it's the basic idea.
15,254
15,343
Can placement new for arrays be used in a portable way?
Is it possible to actually make use of placement new in portable code when using it for arrays? It appears that the pointer you get back from new[] is not always the same as the address you pass in (5.3.4, note 12 in the standard seems to confirm that this is correct), but I don't see how you can allocate a buffer for the array to go in if this is the case. The following example shows the problem. Compiled with Visual Studio, this example results in memory corruption: #include <new> #include <stdio.h> class A { public: A() : data(0) {} virtual ~A() {} int data; }; int main() { const int NUMELEMENTS=20; char *pBuffer = new char[NUMELEMENTS*sizeof(A)]; A *pA = new(pBuffer) A[NUMELEMENTS]; // With VC++, pA will be four bytes higher than pBuffer printf("Buffer address: %x, Array address: %x\n", pBuffer, pA); // Debug runtime will assert here due to heap corruption delete[] pBuffer; return 0; } Looking at the memory, the compiler seems to be using the first four bytes of the buffer to store a count of the number of items in it. This means that because the buffer is only sizeof(A)*NUMELEMENTS big, the last element in the array is written into unallocated heap. So the question is can you find out how much additional overhead your implementation wants in order to use placement new[] safely? Ideally, I need a technique that's portable between different compilers. Note that, at least in VC's case, the overhead seems to differ for different classes. For instance, if I remove the virtual destructor in the example, the address returned from new[] is the same as the address I pass in.
Personally I'd go with the option of not using placement new on the array and instead use placement new on each item in the array individually. For example: int main(int argc, char* argv[]) { const int NUMELEMENTS=20; char *pBuffer = new char[NUMELEMENTS*sizeof(A)]; A *pA = (A*)pBuffer; for(int i = 0; i < NUMELEMENTS; ++i) { new (pA + i) A(); } printf("Buffer address: %x, Array address: %x\n", pBuffer, pA); // don't forget to destroy! for(int i = 0; i < NUMELEMENTS; ++i) { pA[i].~A(); } delete[] pBuffer; return 0; } Regardless of the method you use, make sure you manually destroy each of those items in the array before you delete pBuffer, as you could end up with leaks ;) Note: I haven't compiled this, but I think it should work (I'm on a machine that doesn't have a C++ compiler installed). It still illustrates the point :) Hope it helps in some way! Edit: The reason it needs to keep track of the number of elements is so that it can iterate through them when you call delete on the array and make sure the destructors are called on each of the objects. If it doesn't know how many there are it wouldn't be able to do this.
16,167
16,182
Good refactoring support for C++
The Visual Studio refactoring support for C# is quite good nowadays (though not half as good as some Java IDEs I've seen already) but I'm really missing C++ support. I have seen Refactor! and am currently trying it out, but maybe one of you guys knows a better tool or plugin? I've been working with Visual Assist X now for a week or two and got totally addicted. Thanks for the tip; I'll try to convince my boss to get me a license at work too. I've been bughunting for a few days since Visual Assist X kept messing up my Visual Studio after a few specific refactorings. It took me (and customer support) a week to hunt down, but let's say for now that Visual Assist X is not a good combination with ClipX.
Visual Assist X by Whole Tomato software is not free, but it's absolutely worth the money if you use Visual Studio for C++. http://www.wholetomato.com/
17,117
19,609
C++ std::tr2 for VS2005
Is Boost the only way for VS2005 users to experience TR2? Also, is there an idiot-proof way of downloading only the TR2-related packages? I was looking at the Boost installer provided by BoostPro Consulting. If I select all the threading options with all the packages for MSVC8 it requires 1.1 GB. While I am not short of space, it seems ridiculous that a library needs over a gigabyte of space, and it takes BPC a long time to catch up with the current release. What packages do I need? I'm really only interested in those that comprise std::tr2 and can find that out by comparing those on offer to those in the TR2 report and selecting those from the list, but even then it isn't clear what is needed, and the fact that it is a version behind annoys me. I know from previous encounters with Boost (1.33.1) that self-compiling is a miserable experience: a lot of time wasted to get it started and then a horde of errors passes across your screen faster than you can read, so what you are left with is an uneasy feeling that something is broken but you don't quite know what. I've never had these problems with any Apache library but that is another rant...
I believe you're actually referring to TR1, rather than TR2. The call for proposals for TR2 is open, but don't expect to see much movement until the new C++ standard is out. Also, although Boost is a provider of an implementation of TR1, Dinkumware and the GNU FSF are other providers - on VC2005 Boost is probably the easiest way to access this functionality. The libraries from Boost which are likely to be of most importance are: reference, smart pointer, bind, type traits, array, and regular expressions. The documentation for building Boost has been gradually improving for the last few releases; the current getting-started guide is quite detailed. Smart pointer and bind should work from header files, and IMO these are the most useful elements of TR1.
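As a small taste of the two parts singled out above, here is a sketch that uses Boost's shared_ptr and bind together (both are header-only, so no library build is needed for them):

#include <boost/shared_ptr.hpp>
#include <boost/bind.hpp>
#include <vector>
#include <algorithm>
#include <iostream>

struct Widget
{
    void draw(int scale) const { std::cout << "drawing at scale " << scale << "\n"; }
};

int main()
{
    std::vector< boost::shared_ptr<Widget> > widgets;
    widgets.push_back(boost::shared_ptr<Widget>(new Widget));
    widgets.push_back(boost::shared_ptr<Widget>(new Widget));

    // bind fixes the extra argument so the member function fits for_each;
    // it also happily dereferences the shared_ptr for the call
    std::for_each(widgets.begin(), widgets.end(),
                  boost::bind(&Widget::draw, _1, 2));
    return 0;
}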
17,299
17,312
How can I sort an array of double pointers based on the values they point to?
I am trying to build a function in C/C++ to sort an array and replace each value with its "score" or rank. It takes in a double pointer array to an array of ints, and sorts the double pointers based on the dereferenced value of the integers. I have tried quite a few times to make it work, but can't get it down. Once again, it must sort the double pointers based on the values they point to. This is what I have: void SortArray( int ** pArray, int ArrayLength ) { int i, j, flag = 1; // set flag to 1 to begin initial pass int * temp; // holding variable orig with no * for(i = 1; (i <= ArrayLength) && flag; i++) { flag = 0; for (j = 0; j < (ArrayLength -1); j++) { if (*pArray[j+1] > *pArray[j]) // ascending order simply changes to < { temp = &pArray[j]; // swap elements pArray[j] = &pArray[j+1]; pArray[j+1] = &temp; flag = 1; // indicates that a swap occurred. } } } }
You're close. You're referencing the address of the array items when you swap, which isn't necessary. The items in the array are pointers, and that's what needs to be swapped. See below: void SortArray( int ** pArray, int ArrayLength ) { int i, j, flag = 1; // set flag to 1 to begin initial pass int * temp; // holding variable orig with no * for(i = ArrayLength - 1; i > 0 && flag; i--) { flag = 0; for (j = 0; j < i; j++) { if (*pArray[j] > *pArray[j+1]) // ascending order simply changes to < { temp = pArray[j]; // swap elements pArray[j] = pArray[j+1]; pArray[j+1] = temp; flag = 1; // indicates that a swap occurred. } } } } Also, check out this lovely blog post on Bubble Sorting in case you're interested (sorry, shameless plug :)). Hope that helps you with your homework ;) Edit: Note the subtle "optimisation" where you count back from the array length and only increment up until 'i' in the inner loop. This saves you from needlessly reparsing items that have already been sorted.
17,434
17,443
When should you use 'friend' in C++?
I have been reading through the C++ FAQ and was curious about the friend declaration. I personally have never used it, however I am interested in exploring the language. What is a good example of using friend? Reading the FAQ a bit longer I like the idea of the << >> operator overloading and adding as a friend of those classes. However I am not sure how this doesn't break encapsulation. When can these exceptions stay within the strictness that is OOP?
Firstly (IMO) don't listen to people who say friend is not useful. It IS useful. In many situations you will have objects with data or functionality that are not intended to be publicly available. This is particularly true of large codebases with many authors who may only be superficially familiar with different areas. There ARE alternatives to the friend specifier, but often they are cumbersome (cpp-level concrete classes/masked typedefs) or not foolproof (comments or function name conventions). Onto the answer: the friend specifier allows the designated class access to the private and protected data and functionality of the class making the friend declaration. For example, in the code below anyone may ask a child for their name, but only the mother and the child may change the name. You can take this simple example further by considering a more complex class such as a Window. Quite likely a Window will have many function/data elements that should not be publicly accessible, but ARE needed by a related class such as a WindowManager. #include <string> using std::string; class Child { //Mother class members can access the private parts of class Child. friend class Mother; public: string name( void ); protected: void setName( string newName ); };
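Since the question also brings up the << and >> operators: a stream inserter usually cannot be a member of the class being printed (its left-hand operand is the stream, not the class), so declaring it a friend is the conventional compromise; only that one function sees the private state, so encapsulation is barely loosened. A minimal sketch:

#include <iostream>

class Point
{
public:
    Point(int x, int y) : m_x(x), m_y(y) {}

    // Only this one free function gets at the private members.
    friend std::ostream& operator<<(std::ostream& os, const Point& p);

private:
    int m_x, m_y;
};

std::ostream& operator<<(std::ostream& os, const Point& p)
{
    return os << "(" << p.m_x << ", " << p.m_y << ")";
}

int main()
{
    std::cout << Point(3, 4) << std::endl;   // prints (3, 4)
    return 0;
}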
17,483
17,486
Is there a way to prevent a method from being overridden in subclasses?
Is anyone aware of a language feature or technique in C++ to prevent a child class from overriding a particular method in the parent class? class Base { public: bool someGuaranteedResult() { return true; } }; class Child : public Base { public: bool someGuaranteedResult() { return false; /* Haha I broke things! */ } }; Even though it's not virtual, this is still allowed (at least in the Metrowerks compiler I'm using); all you get is a compile-time warning about hiding the non-virtual inherited function X.
A couple of ideas: Make your function private. Do not make your function virtual. This doesn't actually prevent the function from being shadowed by another definition though. Other than that, I'm not aware of a language feature that will lock away your function in such a way that prevents it from being overridden or hidden while still being invocable through a pointer/reference to the child class. Good luck!
17,612
17,620
How do you place a file in recycle bin instead of delete?
Programmatic solution of course...
http://www.daveamenta.com/2008-05/c-delete-a-file-to-the-recycle-bin/ From above: using Microsoft.VisualBasic; string path = @"c:\myfile.txt"; FileIO.FileSystem.DeleteFile(path, FileIO.UIOption.OnlyErrorDialogs, FileIO.RecycleOption.SendToRecycleBin);
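For a native C++ caller, the shell API gives the same behaviour. A rough sketch (not from the linked article); note that SHFileOperation requires pFrom to be double-null-terminated:

#include <windows.h>
#include <shellapi.h>
#include <wchar.h>

bool SendToRecycleBin(const wchar_t* path)
{
    wchar_t from[MAX_PATH + 2] = { 0 };   // the trailing zeros give the double terminator
    wcsncpy(from, path, MAX_PATH);

    SHFILEOPSTRUCTW op = { 0 };
    op.wFunc  = FO_DELETE;
    op.pFrom  = from;
    op.fFlags = FOF_ALLOWUNDO | FOF_NOCONFIRMATION | FOF_SILENT;  // ALLOWUNDO = recycle bin
    return SHFileOperationW(&op) == 0;
}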
17,670
17,683
Suitable alternative to CryptEncrypt
We have a situation in our product where for a long time some data has been stored in the application's database as SQL string (choice of MS SQL server or sybase SQL anywhere) which was encrypted via the Windows API function CryptEncrypt. (direct and de-cryptable) The problem is that CryptEncrypt can produce NULL's in the output, meaning that when it's stored in the database, the string manipulations will at some point truncate the CipherText. Ideally we'd like to use an algo that will produce CipherText that doesn't contain NULLs as that will cause the least amount of change to the existing databases (changing a column from string to binary and code to deal with binary instead of strings) and just decrypt existing data and re-encrypt with the new algorithm at database upgrade time. The algorithm doesn't need to be the most secure, as the database is already in a reasonably secure environment (not an open network / the inter-webs) but does need to be better than ROT13 (which I can almost decrypt in my head now!) edit: btw, any particular reason for changing ciphertext to cyphertext? ciphertext seems more widely used...
Any semi-decent algorithm will end up with a strong chance of generating a NULL value somewhere in the resulting ciphertext. Why not do something like base-64 encode your resulting binary blob before persisting to the DB? (sample implementation in C++).
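For illustration, a bare-bones base-64 encoder (it is not the linked sample; padding is handled, nothing else is optimised). The output alphabet contains no NUL bytes, so the encoded ciphertext can be stored safely in a string column:

#include <string>
#include <vector>
#include <cstddef>

std::string base64Encode(const std::vector<unsigned char>& data)
{
    static const char table[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    std::size_t i = 0;
    while (i + 2 < data.size())            // whole groups of three bytes
    {
        unsigned long n = (data[i] << 16) | (data[i + 1] << 8) | data[i + 2];
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += table[(n >> 6) & 63];
        out += table[n & 63];
        i += 3;
    }
    if (i + 1 == data.size())              // one trailing byte
    {
        unsigned long n = data[i] << 16;
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += "==";
    }
    else if (i + 2 == data.size())         // two trailing bytes
    {
        unsigned long n = (data[i] << 16) | (data[i + 1] << 8);
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += table[(n >> 6) & 63];
        out += '=';
    }
    return out;
}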
17,786
17,793
Warning C4341 - 'XX': signed value is out of range for enum constant
When compiling my C++ .NET application I get 104 warnings of the type: Warning C4341 - 'XX': signed value is out of range for enum constant Where XX can be WCHAR LONG BIT BINARY GUID ... I can't seem to remove these warnings whatever I do. When I double-click on them it takes me to a part of my code that uses OdbcParameters - and when I try a test project with all the rest of my stuff but no OdbcParameters it doesn't give the warnings. Any idea how I can get rid of these warnings? They're making real warnings from code I've actually written hard to see - and it just gives me a horrible feeling knowing my app has 104 warnings!
This is a compiler bug. Here's another post confirming it's a known issue. I've got the same issue in one of my projects and there's no way to prevent it from being triggered unless you have some way of avoiding the use of OdbcParameter. The most conservative way to suppress only the buggy warnings is to use #pragma warning( push ) #pragma warning( disable: 4341 ) // code affected by bug #pragma warning( pop )
17,928
21,773
Using an ocx in a console application
I want to quickly test an OCX. How do I drop that OCX into a console application? I have found some tutorials on CodeProject, but they are incomplete.
Sure..it's pretty easy. Here's a fun app I threw together. I'm assuming you have Visual C++. Save to test.cpp and compile: cl.exe /EHsc test.cpp To test with your OCX you'll need to either #import the typelib and use it's CLSID (or just hard-code the CLSID) in the CoCreateInstance call. Using #import will also help define any custom interfaces you might need. #include "windows.h" #include "shobjidl.h" #include "atlbase.h" // // compile with: cl /EHsc test.cpp // // A fun little program to demonstrate creating an OCX. // (CLSID_TaskbarList in this case) // BOOL CALLBACK RemoveFromTaskbarProc( HWND hwnd, LPARAM lParam ) { ITaskbarList* ptbl = (ITaskbarList*)lParam; ptbl->DeleteTab(hwnd); return TRUE; } void HideTaskWindows(ITaskbarList* ptbl) { EnumWindows( RemoveFromTaskbarProc, (LPARAM) ptbl); } // ============ BOOL CALLBACK AddToTaskbarProc( HWND hwnd, LPARAM lParam ) { ITaskbarList* ptbl = (ITaskbarList*)lParam; ptbl->AddTab(hwnd); return TRUE;// continue enumerating } void ShowTaskWindows(ITaskbarList* ptbl) { if (!EnumWindows( AddToTaskbarProc, (LPARAM) ptbl)) throw "Unable to enum windows in ShowTaskWindows"; } // ============ int main(int, char**) { CoInitialize(0); try { CComPtr<IUnknown> pUnk; if (FAILED(CoCreateInstance(CLSID_TaskbarList, NULL, CLSCTX_INPROC_SERVER|CLSCTX_LOCAL_SERVER, IID_IUnknown, (void**) &pUnk))) throw "Unabled to create CLSID_TaskbarList"; // Do something with the object... CComQIPtr<ITaskbarList> ptbl = pUnk; if (ptbl) ptbl->HrInit(); HideTaskWindows(ptbl); MessageBox( GetDesktopWindow(), _T("Check out the task bar!"), _T("StackOverflow FTW"), MB_OK); ShowTaskWindows(ptbl); } catch( TCHAR * msg ) { MessageBox( GetDesktopWindow(), msg, _T("Error"), MB_OK); } CoUninitialize(); return 0; }
17,944
17,974
How to round up the result of integer division?
I'm thinking in particular of how to display pagination controls, when using a language such as C# or Java. If I have x items which I want to display in chunks of y per page, how many pages will be needed?
Found an elegant solution: int pageCount = (records + recordsPerPage - 1) / recordsPerPage; Source: Number Conversion, Roland Backhouse, 2001
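For illustration, both that form and an equivalent that avoids the (records + recordsPerPage) addition overflowing when records is near INT_MAX:

#include <iostream>

int main()
{
    int records = 42, recordsPerPage = 10;

    int pages  = (records + recordsPerPage - 1) / recordsPerPage;            // 5
    int pages2 = records / recordsPerPage + (records % recordsPerPage != 0); // also 5

    std::cout << pages << " " << pages2 << std::endl;
    return 0;
}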
18,765
18,786
Importing C++ enumerations into C#
I'm currently working on creating a new C# project that needs to interact with an older C++ application. There is an error enumeration that already exists in the C++ app that I need to use in the C# app. I don't want to just redeclare the enumeration in C# because that could cause sync issues down the line if the files aren't updated together. All that being said, my question is this: is there a way for me to take an enumeration declared like so: typedef enum { eDEVICEINT_ERR_FATAL = 0x10001 ... } eDeviceIntErrCodes; and use it in a C# program like so: eDeviceIntErrCodes.eDEVICEINT_ERR_FATAL
Check out the PInvoke Interop Assistant tool http://www.codeplex.com/clrinterop/Release/ProjectReleases.aspx?ReleaseId=14120. Its a useful tool for generating PInvoke signatures for native methods. If I feed it your enum it generates this code. There is a command line version of the tool included so you could potentially build an automated process to keep the C# definition of the enum up to date whenever the C++ version changes. public enum eDeviceIntErrCodes { /// eDEVICEINT_ERR_FATAL -> 0x10001 eDEVICEINT_ERR_FATAL = 65537, }
18,959
18,975
How to match linux device path to windows drive name?
I'm writing an application that on some stage performs low-level disk operations in Linux environment. The app actually consists of 2 parts, one runs on Windows and interacts with a user and another is a linux part that runs from a LiveCD. User makes a choice of Windows drive letters and then a linux part performs actions with corresponding partitions. The problem is finding a match between a Windows drive letter (like C:) and a linux device name (like /dev/sda1). This is my current solution that I rate as ugly: store partitions information (i.e. drive letter, number of blocks, drive serial number etc.) in Windows in some pre-defined place (i.e. the root of the system partition). read a list of partitions from /proc/partitions. Get only those partitions that has major number for SCSI or IDE hard drives and minor number that identifies them as real partitions and not the whole disks. Try to mount each of them with either ntfs or vfat file systems. Check whether the mounted partition contains the information stored by Windows app. Upon finding the required information written by the Windows app make the actual match. For each partition found in /proc/partitions acquire drive serial number (via HDIO_GET_IDENTITY syscall), number of blocks (from /proc/partitions) and drive offset (/sys/blocks/drive_path/partition_name/start), compare this to the Windows information and if this matches - store a Windows drive letter along with a linux device name. There are a couple of problems in this scheme: This is ugly. Writing data in Windows and then reading it in Linux makes testing a nightmare. linux device major number is compared only with IDE or SCSI devices. This would probably fail, i.e. on USB or FireWire disks. It's possible to add these types of disks, but limiting the app to only known subset of possible devices seems to be rather bad idea. looks like HDIO_GET_IDENTITY works only on IDE and SATA drives. /sys/block hack may not work on other than IDE or SATA drives. Any ideas on how to improve this schema? Perhaps there is another way to determine windows names without writing all the data in windows app? P.S. The language of the app is C++. I can't change this.
Partitions have UUIDs associated with them. I don't know how to find these in Windows, but in Linux you can find the UUID for each partition with: sudo vol_id -u device (e.g. /dev/sda1) If there is an equivalent function in Windows you could simply store the UUIDs for whatever partitions they pick, then iterate through all known partitions in Linux and match the UUIDs. Edit: This may be a Linux-only thing, and it may specifically be the vol_id utility that generates these from something (instead of reading metadata off the drive). Having said that, there is nothing stopping you from getting the source for vol_id and checking out what it does.
19,193
19,255
How to make a tree in C++?
How do I make a tree data structure in C++ that uses iterators instead of pointers? I couldn't find anything in the STL that can do this. What I would like to do is to be able to create and manipulate trees like this: #include <iostream> #include <tree> using namespace std; int main() { tree<int> myTree; tree<int>::iterator i = myTree.root(); *i = 42; tree<int>::iterator j = i.add_child(); *j = 777; j = j.parent(); if (i == myTree.root() && i == j) cout << "i and j are both pointing to the root\n"; return 0; } Thank you, tree.hh seems to be just what I was looking for. If this is for gaining the benefit of a data-structure holding arbitrary index types, optimized for searching and good at insertion then consider using a map. A map is an associative container that has performance guarantees identical to those of a tree: logarithmic searching, logarithmic insertion, logarithmic deletion, linear space. Internally they are often implemented as red-black trees, although that is not a guarantee. Still, as an STL user all you should care about is the performance guarantees of the STL algorithms and data-structures. Whether they're implemented as trees or little green men shouldn't matter to you. I'm not sure if a map is what I need, but thanks for the info. I will remember to use maps whenever possible instead of implementing trees.
Here is tree.hh which is a bit close to what you want to do, though a bit different. Here is a piece of code extracted from its website. int main(int, char **) { tree<string> tr; tree<string>::iterator top, one, two, loc, banana; top=tr.begin(); one=tr.insert(top, "one"); two=tr.append_child(one, "two"); tr.append_child(two, "apple"); banana=tr.append_child(two, "banana"); tr.append_child(banana,"cherry"); tr.append_child(two, "peach"); tr.append_child(one,"three"); loc=find(tr.begin(), tr.end(), "two"); if(loc!=tr.end()) { tree<string>::sibling_iterator sib=tr.begin(loc); while(sib!=tr.end(loc)) { cout << (*sib) << endl; ++sib; } cout << endl; tree<string>::iterator sib2=tr.begin(loc); tree<string>::iterator end2=tr.end(loc); while(sib2!=end2) { for(int i=0; i<tr.depth(sib2)-2; ++i) cout << " "; cout << (*sib2) << endl; ++sib2; } } } Now what's different? Your implementation is simpler when it comes to append a node to the tree. Though your version is indiscutably simpler, the dev of this lib probably wanted to have some info accessible without browsing the tree, such as the size of the tree for instance. I also assume he didn't want to store the root on all nodes for performance reason. So if you want to implement it your way, I suggest you keep most of the logic and add the link to the parent tree in the iterator and rewrite append a bit.
19,347
19,372
What is the best way to go from Java/C# to C++?
At my university most of my classes have been in Java. I have also recently learned C# (and the Visual Studio environment) at a summer internship. Now I'm taking an Intro to Computer Graphics class and the grad student teaching the class prefers us to use C++ to access the OpenGL bindings via GLUT. Does anyone have any good resources on how to make a good transition from Java/C# to C++? Obviously pointers are going to be a big issue, but any other things I should be looking out for? Any tutorials, guides, etc. would be very helpful! Thanks!
Yeah, I got bit by the same bug. The university tended to lean on Java, and then allowed you to choose the language you wanted to work with during projects. The best way is to just jump in. Start small, take baby steps, and just Google things that confuse you when you get there. Also find projects that have released their source code. See how they structure their programs. Basically, just tinker with concepts. There is plenty of information around the web. Make it fun and grab a C++ game development book so it doesn't become mind numbing too quickly. Here's some places that I found useful while learning http://www.cprogramming.com/ http://www.wikipedia.com http://www.cplusplus.com/
20,731
20,792
How do you clear a stringstream variable?
I've tried several things already, std::stringstream m; m.empty(); m.clear(); both of which don't work.
For all the standard library types the member function empty() is a query, not a command, i.e. it means "are you empty?" not "please throw away your contents". The clear() member function is inherited from ios and is used to clear the error state of the stream, e.g. if a file stream has the error state set to eofbit (end-of-file), then calling clear() will set the error state back to goodbit (no error). For clearing the contents of a stringstream, using: m.str(""); is correct, although using: m.str(std::string()); is technically more efficient, because you avoid invoking the std::string constructor that takes const char*. But any compiler these days should be able to generate the same code in both cases - so I would just go with whatever is more readable.
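Putting the two together: when the same stream is reused for new content, you typically want both calls, str("") to drop the buffer and clear() in case an earlier extraction left eof/fail bits set:

#include <sstream>
#include <iostream>

int main()
{
    std::stringstream m;
    m << 42;
    std::cout << m.str() << std::endl;   // 42

    m.str("");    // discard the contents
    m.clear();    // reset error flags if the stream was previously read from

    m << "hello";
    std::cout << m.str() << std::endl;   // hello
    return 0;
}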
20,787
20,813
When to use STL bitsets instead of separate variables?
In what situation would it be more appropriate for me to use a bitset (STL container) to manage a set of flags rather than having them declared as a number of separate (bool) variables? Will I get a significant performance gain if I used a bitset for 50 flags rather than using 50 separate bool variables?
Well, 50 bools as a bitset will take 7 bytes, while 50 bools as bools will take 50 bytes. These days that's not really a big deal, so using bools is probably fine. However, one place a bitset might be useful is if you need to pass those bools around a lot, especially if you need to return the set from a function. Using a bitset you have less data that has to be moved around on the stack for returns. Then again, you could just use refs instead and have even less data to pass around. :)
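A quick illustration of the interface, which is arguably a bigger win than the few bytes saved:

#include <bitset>
#include <iostream>

enum Flag { Visible, Dirty, Selected, FlagCount };

int main()
{
    std::bitset<FlagCount> flags;

    flags.set(Dirty);          // switch one flag on
    flags[Selected] = true;    // index syntax works as well

    if (flags.test(Dirty) && !flags.none())
        std::cout << flags.count() << " flags set" << std::endl;  // 2 flags set
    return 0;
}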
21,303
21,322
In C++ can constructor and destructor be inline functions?
VC++ makes functions which are implemented within the class declaration inline functions. If I declare a class Foo as follows, then are the CONSTRUCTOR and DESTRUCTOR inline functions? class Foo { char* p; public: Foo() { p = new char[0x00100000]; } ~Foo() { delete [] p; } }; { Foo f; (f); }
Defining the body of the constructor INSIDE the class has the same effect as placing the function OUTSIDE the class with the "inline" keyword. In both cases it's a hint to the compiler. An "inline" function doesn't necessarily mean the function will be inlined. That depends on the complexity of the function and other rules.
21,558
21,607
In C++, what is a virtual base class?
I want to know what a "virtual base class" is and what it means. Let me show an example: class Foo { public: void DoSomething() { /* ... */ } }; class Bar : public virtual Foo { public: void DoSpecific() { /* ... */ } };
Virtual base classes, used in virtual inheritance, is a way of preventing multiple "instances" of a given class appearing in an inheritance hierarchy when using multiple inheritance. Consider the following scenario: class A { public: void Foo() {} }; class B : public A {}; class C : public A {}; class D : public B, public C {}; The above class hierarchy results in the "dreaded diamond" which looks like this: A / \ B C \ / D An instance of D will be made up of B, which includes A, and C which also includes A. So you have two "instances" (for want of a better expression) of A. When you have this scenario, you have the possibility of ambiguity. What happens when you do this: D d; d.Foo(); // is this B's Foo() or C's Foo() ?? Virtual inheritance is there to solve this problem. When you specify virtual when inheriting your classes, you're telling the compiler that you only want a single instance. class A { public: void Foo() {} }; class B : public virtual A {}; class C : public virtual A {}; class D : public B, public C {}; This means that there is only one "instance" of A included in the hierarchy. Hence D d; d.Foo(); // no longer ambiguous This is a mini summary. For more information, have a read of this and this. A good example is also available here.
21,647
21,656
Reading from text file until EOF repeats last line
The following C++ code uses a ifstream object to read integers from a text file (which has one number per line) until it hits EOF. Why does it read the integer on the last line twice? How to fix this? Code: #include <iostream> #include <fstream> using namespace std; int main() { ifstream iFile("input.txt"); // input.txt has integers, one per line while (!iFile.eof()) { int x; iFile >> x; cerr << x << endl; } return 0; } input.txt: 10 20 30 Output: 10 20 30 30 Note: I've skipped all error checking code to keep the code snippet small. The above behaviour is seen on Windows (Visual C++), cygwin (gcc) and Linux (gcc).
Just follow closely the chain of events. Grab 10 Grab 20 Grab 30 Grab EOF Look at the second-to-last iteration. You grabbed 30, then carried on to check for EOF. You haven't reached EOF because the EOF mark hasn't been read yet ("binarically" speaking, its conceptual location is just after the 30 line). Therefore you carry on to the next iteration. x is still 30 from previous iteration. Now you read from the stream and you get EOF. x remains 30 and the ios::eofbit is raised. You output to stderr x (which is 30, just like in the previous iteration). Next you check for EOF in the loop condition, and this time you're out of the loop. Try this: while (true) { int x; iFile >> x; if( iFile.eof() ) break; cerr << x << endl; } By the way, there is another bug in your code. Did you ever try to run it on an empty file? The behaviour you get is for the exact same reason.
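The usual idiom folds the check into the loop condition, since operator>> returns the stream and the stream converts to false once extraction fails:

#include <iostream>
#include <fstream>

int main()
{
    std::ifstream iFile("input.txt");
    int x;
    while (iFile >> x)          // stops on EOF or on a malformed line
        std::cerr << x << std::endl;
    return 0;
}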
22,239
22,262
Why does int main() {} compile?
(I'm using Visual C++ 2008) I've always heard that main() is required to return an integer, but here I didn't put in return 0; and it compiled with 0 errors and 0 warnings! In the debug window it says the program has exited with code 0. If this function is named anything other than main(), the compiler complains saying 'blah' must return a value. Sticking a return; also causes the error to appear. But leaving it out completely, it compiles just fine. #include <iostream> using namespace std; int main() { cout << "Hey look I'm supposed to return an int but I'm not gonna!\n"; } Could this be a bug in VC++?
3.6.1 Main function .... 2 An implementation shall not predefine the main function. This function shall not be overloaded. It shall have a return type of type int, but otherwise its type is implementation-defined. All implementations shall allow both of the following definitions of main: int main() { /* ... */ } and int main(int argc, char* argv[]) { /* ... */ } .... and it continues to add ... 5 A return statement in main has the effect of leaving the main function (destroying any objects with automatic storage duration) and calling exit with the return value as the argument. If control reaches the end of main without encountering a return statement, the effect is that of executing return 0; attempting to find an online copy of the C++ standard so I could quote this passage I found a blog post that quotes all the right bits better than I could.
22,379
22,399
Implementing a log watcher
I'm wondering how you can implement a program similar to tail -f in C/C++, a program that watches for and processes new lines added to a log file?
You can use fseek() to clear the eof condition on the stream. Essentially, read to the end of the file, sleep for a while, fseek() (without changing your position) to clear eof, the read to end of file again. wash, rinse, repeat. man fseek(3) for details. Here's what it looks like in perl. perl's seek() is essentially a wrapper for fseek(3), so the logic is the same: wembley 0 /home/jj33/swap >#> cat p my $f = shift; open(I, "<$f") || die "Couldn't open $f: $!\n"; while (1) { seek(I, 0, 1); while (defined(my $l = <I>)) { print "Got: $l"; } print "Hit EOF, sleeping\n"; sleep(10); } wembley 0 /home/jj33/swap >#> cat tfile This is some text in a file wembley 0 /home/jj33/swap >#> perl p tfile Got: This is Got: some Got: text Got: in Got: a file Hit EOF, sleeping Then, in another session: wembley 0 /home/jj33/swap > echo "another line of text" >> tfile And back to the original program output: Hit EOF, sleeping Got: another line of text Hit EOF, sleeping
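A comparable sketch of the same loop in C/C++, using clearerr() to drop the EOF state before retrying (error handling and partial-line handling omitted for brevity):

#include <stdio.h>
#include <unistd.h>

int main(int argc, char* argv[])
{
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    FILE* fp = fopen(argv[1], "r");
    if (!fp) { perror("fopen"); return 1; }

    char line[4096];
    for (;;)
    {
        while (fgets(line, sizeof line, fp) != NULL)
            printf("Got: %s", line);
        clearerr(fp);   // clear the EOF flag so the next fgets tries again
        sleep(1);
    }
}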
22,459
22,473
memset() causing data abort
I'm getting some strange, intermittent, data aborts (< 5% of the time) in some of my code, when calling memset(). The problem is that is usually doesn't happen unless the code is running for a couple days, so it's hard to catch it in the act. I'm using the following code: char *msg = (char*)malloc(sizeof(char)*2048); char *temp = (char*)malloc(sizeof(char)*1024); memset(msg, 0, 2048); memset(temp, 0, 1024); char *tempstr = (char*)malloc(sizeof(char)*128); sprintf(temp, "%s %s/%s %s%s", EZMPPOST, EZMPTAG, EZMPVER, TYPETXT, EOL); strcat(msg, temp); //Add Data memset(tempstr, '\0', 128); wcstombs(tempstr, gdevID, wcslen(gdevID)); sprintf(temp, "%s: %s%s", "DeviceID", tempstr, EOL); strcat(msg, temp); As you can see, I'm not trying to use memset with a size larger that what's originally allocated with malloc() Anyone see what might be wrong with this?
malloc can return NULL if no memory is available. You're not checking for that.
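A sketch of how the start of that function could guard against it (free(NULL) is a no-op, so the cleanup is safe either way); the rest of the code stays the same:

char *msg  = (char*)malloc(sizeof(char) * 2048);
char *temp = (char*)malloc(sizeof(char) * 1024);
if (msg == NULL || temp == NULL)
{
    free(msg);       // releasing whichever one did succeed
    free(temp);
    return;          // or whatever error path the surrounding function uses
}
memset(msg, 0, 2048);
memset(temp, 0, 1024);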
23,190
25,002
How Does One Sum Dimensions of an Array Specified at Run-Time?
I am working on a function to establish the entropy of a distribution. It uses a copula, if any are familiar with that. I need to sum up the values in the array based on which dimensions are "cared about." Example: Consider the following example... Dimension 0 (across) _ _ _ _ _ _ _ _ _ _ _ _ _ |_ 0 _|_ 0 _|_ 0 _|_ 2 _| Dimension 1 |_ 1 _|_ 0 _|_ 2 _|_ 0 _| (down) |_ 0 _|_ 3 _|_ 0 _|_ 6 _| |_ 0 _|_ 0 _|_ 0 _|_ 0 _| I "care about" dimension 0 only, and "don't care" about the rest (dim 1). Summing this array with the above specifications will "collapse" the "stacks" of dimension 1 down to a single 4 x 1 array: _ _ _ _ _ _ _ _ _ _ _ _ _ |_ 1 _|_ 3 _|_ 2 _|_ 8 _| This can then be summed, or have any operation performed. I need to do this with an array of 'n' dimensions, which could feasibly be 20. Also, I need to be able to do this, caring about certain dimensions, and collapsing the rest. I am having an especially hard time with this because I cant visualize 20 dimensions :p . If anyone could help me set up some c/c++ code to collapse/sum, I would be very very grateful. Update: Just got home. Here is some info to answer your questions: Sorry for rolling back the edits, I was hoping when I clicked roll-back it would show me the changes so I could see what I messed up, a bit like wikipedia. This wasn't the case, as I found out. @jeff - What doesnt make sense? I am using this great service for (what I think is) a legit reason. I want to get better at my hobby, which is all it is, as I am in high school. Many of my posts regard implementing a genetic algorithm (This post, sparsearray, rank an array, pointer manipulation). I am using a sparse array representation, as it is possible to exceed the number of molecules in the universe using a traditional (dense) array. For now, the implementation of the sparsearray itself doesnt matter a whole lot, as I am working to make it work with a standard array before going to a sparse representation. For those who havent seen my previous questions, I am using a binary search tree as the structure to contain the sparse array points, and a "driver" function to traverse the tree as necessary, returning whatever the function is designed to do. This is flexible, so I can accomodate a lot of different methods of accessing the array. The structure is a hypercube, and the number of dimensions is specified at run time, as well as the length of each dimension (which are all the same, as it is a hypercube). Thanks everyone for your imput.
This could have applications. Lets say you implemented a 2D Conway's Game of Life (which defines a 2D plane, 1 for 'alive', 0 for 'dead') and you stored the Games history for every iteration (which then defines a 3D cube). If you wanted to know how many bacteria there was alive over history, you would use the above algorithm. You could use the same algorithm for a 3D, (and 4D, 5D etc.) version of Game of Life grid. I'd say this was a question for recursion, I'm not yet a C programmer but I know it is possible in C. In python, def iter_arr(array): sum = 0 for i in array: if type(i) == type(list()): sum = sum + iter_arr(i) else: sum = sum + i return sum Iterate over each element in array If element is another array, call the function again If element is not array, add it to the sum Return sum You would then apply this to each element in the 'cared about' dimension. This is easier in python due to duck-typing though ...
23,209
26,180
C++ linker unresolved external symbols
I'm building an application against some legacy, third party libraries, and having problems with the linking stage. I'm trying to compile with Visual Studio 9. My compile command is: cl -DNT40 -DPOMDLL -DCRTAPI1=_cdecl -DCRTAPI2=cdecl -D_WIN32 -DWIN32 -DWIN32_LEAN_AND_MEAN -DWNT -DBYPASS_FLEX -D_INTEL=1 -DIPLIB=none -I. -I"D:\src\include" -I"C:\Program Files\Microsoft Visual Studio 9.0\VC\include" -c -nologo -EHsc -W1 -Ox -Oy- -MD mymain.c The code compiles cleanly. The link command is: link -debug -nologo -machine:IX86 -verbose:lib -subsystem:console mymain.obj wsock32.lib advapi32.lib msvcrt.lib oldnames.lib kernel32.lib winmm.lib [snip large list of dependencies] D:\src\lib\app_main.obj -out:mymain.exe The errors that I'm getting are: app_main.obj : error LNK2019: unresolved external symbol "_\_declspec(dllimport) public: void __thiscall std::locale::facet::_Register(void)" (__imp_?_Register@facet@locale@std@@QAEXXZ) referenced in function "class std::ctype<char> const & __cdecl std::use_facet<class std::ctype<char> (class std::locale const &)" (??$use_facet@V?$ctype@D@std@@@std@@YAABV?$ctype@D@0@ABVlocale@0@@Z) app_main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: static unsigned int __cdecl std::ctype<char>::_Getcat(class std::locale::facet const * *)" (__imp_?_Getcat@?$ctype@D@std@@SAIPAPBVfacet@locale@2@@Z) referenced in function "class std::ctype<char> const & __cdecl std::use_facet<class std::ctype<char> (class std::locale const &)" (??$use_facet@V?$ctype@D@std@@@std@@YAABV?$ctype@D@0@ABVlocale@0@@Z) app_main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: static unsigned int __cdecl std::ctype<unsigned short>::_Getcat(class std::locale::facet const * *)" (__imp_?_Getcat@?$ctype@G@std@@SAIPAPBVfacet@locale@2@@Z) referenced in function "class std::ctype<unsigned short> const & __cdecl std::use_facet<class std::ctype<unsigned short> >(class std::locale const &)" (??$use_facet@V?$ctype@G@std@@@std@@YAABV?$ctype@G@0@ABVlocale@0@@Z) mymain.exe : fatal error LNK1120: 3 unresolved externals Notice that these errors are coming from the legacy code, not my code - app_main.obj is part of the legacy code, while mymain.c is my source. I've done some searching around, and what that I've read says that this type of error is caused by a mismatch with the -MD switch between my code and the library that I'm linking to. Since I'm dealing with legacy code, a solution has to come from my environment. It's been a long time since I've done C++ work, and even longer since I've used Visual Studio, so I'm hoping that this is just some ignorance on my part. Any ideas on how to get these resolved?
After trying to get this stuff to compile under VS 2008, I tried earlier versions of VS - 2005 worked with warnings, and 2003 just worked. I double checked the linkages and couldn't find any problems, so either I just couldn't find it, or that wasn't the problem. So to reiterate, downgrading to VS 2003 fixed it.
24,196
24,223
Simple Frameworks for Displaying Bitmaps and Handling Button Presses
We have a set of applications that basically display a bunch of bitmaps and text, then allow user to press "buttons" (certain bitmaps) that cause actions to occur. We currently have these implemented using DirectX and a bunch of code to place the bitmaps and handle the button-presses. But we'd like to have the following features: portable to Linux some sort of "editor" that would allow us to lay out screens without hard-coding locations of elements in code animation we need to be able to overlay video not resource intensive (these terminals don't have a lot of memory or CPU) we're currently using C++, so management would prefer that, but other languages would be considered We'd prefer a free, open-source solution, but would be willing to buy something if it is not too expensive. (We have a couple dozen developers, and tens of thousands of terminals deployed.) We don't like the common GUI toolkits or widgets. We want something that has more of the look of a game than of a dialog box. Any suggestions for off-the-shelf stuff we could use?
Maybe the way to go is something like Clutter or Allegro. If you check in this article at ArsTechnica what they are using Clutter for, you might get an idea how to use it. I don't know for sure if it works on Windows, but I'm pretty sure it does, considering it only depends on libraries that are supported under Windows.
24,212
30,029
Use QItemDelegate to show image thumbnails
What's the best way to use QT4's QItemDelegate to show thumbnails for images in a view? Specifically, how do you stop the item delegate from blocking when generating pixmaps from very large image files (> 500MB)? Can anyone link to some example code that achieves this? Then again, perhaps this isn't the place to look for Qt-specific code.
You're doing it wrong if you are generating pixmaps inside any of the delegate methods (paint, draw...). Try to generate the thumbnails only once (on worker thread or maybe not even at runtime, if possible) and have the delegate just display them for the appropriate role. If you do it at runtime display a default picture until you have the thumbnail generated (like web browsers do with pictures that are not yet downloaded).
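A rough Qt 4 sketch of that division of labour: the delegate only paints whatever pixmap the model already exposes under Qt::DecorationRole, and falls back to a placeholder until the worker thread has stored the real thumbnail (the resource path is made up):

#include <QItemDelegate>
#include <QPainter>
#include <QPixmap>

class ThumbnailDelegate : public QItemDelegate
{
public:
    void paint(QPainter *painter, const QStyleOptionViewItem &option,
               const QModelIndex &index) const
    {
        QPixmap thumb = qvariant_cast<QPixmap>(index.data(Qt::DecorationRole));
        if (thumb.isNull())
            thumb = QPixmap(":/placeholder.png");   // hypothetical resource

        // No decoding or scaling here; the view stays responsive.
        painter->drawPixmap(option.rect.topLeft(), thumb);
    }
};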
24,542
24,560
Using bitwise operators for Booleans in C++
Is there any reason not to use the bitwise operators &, |, and ^ for "bool" values in C++? I sometimes run into situations where I want exactly one of two conditions to be true (XOR), so I just throw the ^ operator into a conditional expression. I also sometimes want all parts of a condition to be evaluated whether the result is true or not (rather than short-circuiting), so I use & and |. I also need to accumulate Boolean values sometimes, and &= and |= can be quite useful. I've gotten a few raised eyebrows when doing this, but the code is still meaningful and cleaner than it would be otherwise. Is there any reason NOT to use these for bools? Are there any modern compilers that give bad results for this?
|| and && are boolean operators and the built-in ones are guaranteed to return either true or false. Nothing else. |, & and ^ are bitwise operators. When the domain of numbers you operate on is just 1 and 0, then they are exactly the same, but in cases where your booleans are not strictly 1 and 0 – as is the case with the C language – you may end up with some behavior you didn't want. For instance: BOOL two = 2; BOOL one = 1; BOOL and = two & one; //and = 0 BOOL cand = two && one; //cand = 1 In C++, however, the bool type is guaranteed to be only either a true or a false (which convert implicitly to respectively 1 and 0), so it's less of a worry from this stance, but the fact that people aren't used to seeing such things in code makes a good argument for not doing it. Just say b = b && x and be done with it.
24,901
24,904
Is there a performance difference between i++ and ++i in C++?
We have the question is there a performance difference between i++ and ++i in C? What's the answer for C++?
[Executive Summary: Use ++i if you don't have a specific reason to use i++.] For C++, the answer is a bit more complicated. If i is a simple type (not an instance of a C++ class), then the answer given for C ("No there is no performance difference") holds, since the compiler is generating the code. However, if i is an instance of a C++ class, then i++ and ++i are making calls to one of the operator++ functions. Here's a standard pair of these functions: Foo& Foo::operator++() // called for ++i { this->data += 1; return *this; } Foo Foo::operator++(int ignored_dummy_value) // called for i++ { Foo tmp(*this); // variable "tmp" cannot be optimized away by the compiler ++(*this); return tmp; } Since the compiler isn't generating code, but just calling an operator++ function, there is no way to optimize away the tmp variable and its associated copy constructor. If the copy constructor is expensive, then this can have a significant performance impact.
25,461
29,300
interfacing with stdbool.h C++
In a project I am interfacing between C++ and a C library that uses stdbool.h defined as follows. #ifndef _STDBOOL_H #define _STDBOOL_H /* C99 Boolean types for compilers without C99 support */ /* http://www.opengroup.org/onlinepubs/009695399/basedefs/stdbool.h.html */ #if !defined(__cplusplus) #if !defined(__GNUC__) /* _Bool builtin type is included in GCC */ typedef enum { _Bool_must_promote_to_int = -1, false = 0, true = 1 } _Bool; #endif #define bool _Bool #define true 1 #define false 0 #define __bool_true_false_are_defined 1 #endif #endif Some structures have bool members. So if I have one of these structures defined as a local variable within a C++ function and pass it to a C function, the sizes are inconsistent between C++ and C, as bool is one byte in C++ and 4 bytes in C. Does anyone have any advice on how to overcome this without resorting to my current solution, which is //#define bool _Bool #define bool unsigned char which is against the C99 standard for stdbool.h?
I found the answer to my own question by finding a more compatible implementation of stdbool.h that is compliant with the C99 standard. #ifndef _STDBOOL_H #define _STDBOOL_H #include <stdint.h> /* C99 Boolean types for compilers without C99 support */ /* http://www.opengroup.org/onlinepubs/009695399/basedefs/stdbool.h.html */ #if !defined(__cplusplus) #if !defined(__GNUC__) /* _Bool builtin type is included in GCC */ /* ISO C Standard: 5.2.5 An object declared as type _Bool is large enough to store the values 0 and 1. */ /* We choose 8 bit to match C++ */ /* It must also promote to integer */ typedef int8_t _Bool; #endif /* ISO C Standard: 7.16 Boolean type */ #define bool _Bool #define true 1 #define false 0 #define __bool_true_false_are_defined 1 #endif #endif This is taken from the Ada Class Library project.
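If both sides are built together, it can also help to assert the agreed layout at compile time. A small hedged sketch (the struct and member names are made up; _Static_assert needs C11, static_assert needs C++11):

/* shared_types.h is hypothetical and is assumed to declare:
   struct packet { bool valid; int32_t len; };                  */
#include <stddef.h>
#include "shared_types.h"

#ifdef __cplusplus
static_assert(sizeof(bool) == 1, "C++ bool expected to be 1 byte for this ABI");
static_assert(offsetof(packet, len) == 4, "packet layout differs from the C build");
#else
_Static_assert(sizeof(_Bool) == 1, "C _Bool expected to be 1 byte for this ABI");
_Static_assert(offsetof(struct packet, len) == 4, "packet layout differs from the C++ build");
#endif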
25,730
25,831
What is the best free memory leak detector for a C/C++ program and its plug-in DLLs?
I have a .exe and many plug-in .dll modules that the .exe loads. (I have source for both.) A cross-platform (with source) solution would be ideal, but the platform can be narrowed to WinXP and Visual Studio (7.1/2003 in my case). The built-in VS leak detector only gives the line where new/malloc was called from, but I have a wrapper for allocations, so a full symbolic stack trace would be best. The detector should also be able to detect a leak in both the .exe and its accompanying plug-in .dll modules.
I personally use Visual Leak Detector, though it can cause large delays when large blocks are leaked (it displays the contents of the entire leaked block).
26,098
26,121
Overloaded functions in C++ DLL def file
I'm writing a C/C++ DLL and want to export certain functions, which I've done before using a .def file like this LIBRARY "MyLib" EXPORTS Foo Bar with the code defined as this, for example: int Foo(int a); void Bar(int foo); However, what if I want to declare an overloaded method of Foo() like: int Foo(int a, int b); As the def file only has the function name and not the full prototype, I can't see how it would handle the overloaded functions. Do you just use the one entry and then specify which overloaded version you want when passing in the properly prototyped function pointer to LoadLibrary()? Edit: To be clear, this is on Windows using Visual Studio 2005. Edit: Marked the non-def (__declspec) method as the answer...I know this doesn't actually solve the problem using def files as I wanted, but it seems that there is likely no (official) solution using def files. Will leave the question open, however, in case someone knows something we don't about overloaded functions and def files.
In the code itself, mark the functions you want to export using __declspec(dllexport). For example: #define DllExport __declspec(dllexport) int DllExport Foo( int a ) { // implementation } int DllExport Foo( int a, int b ) { // implementation } If you do this, you do not need to list the functions in the .def file. Alternatively, you may be able to use a default parameter value, like: int Foo( int a, int b = -1 ) This assumes that there exists a value for b that you can use to indicate that it is unused. If -1 is a legal value for b, or if there isn't or shouldn't be a default, this won't work. Edit (Adam Haile): Corrected to use __declspec as __dllspec was not correct so I could mark this as the official answer...it was close enough. Edit (Graeme): Oops - thanks for correcting my typo!
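If the DLL also has to stay consumable through LoadLibrary()/GetProcAddress(), a common workaround (not a .def-file solution) is to keep the C++ overloads internal and export flat extern "C" wrappers with distinct, undecorated names. The wrapper names and DLL name below are made up:

#include <windows.h>

int Foo(int a);              // the existing C++ overloads stay as they are
int Foo(int a, int b);

extern "C" __declspec(dllexport) int FooA(int a)         { return Foo(a); }
extern "C" __declspec(dllexport) int FooAB(int a, int b) { return Foo(a, b); }

// Client side, loaded at run time:
typedef int (*FooABFn)(int, int);

int CallFooAB(int a, int b)
{
    HMODULE h = LoadLibrary(TEXT("MyLib.dll"));
    FooABFn fn = h ? (FooABFn)GetProcAddress(h, "FooAB") : 0;   // plain, undecorated name
    return fn ? fn(a, b) : 0;                                   // real code would also FreeLibrary(h)
}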
26,151
26,162
Template typedefs - What's your work around?
C++0x has template aliases (sometimes referred to as template typedefs). See here. The current C++ spec does not. What do you like to use as a workaround? Container objects or macros? Do you feel it's worth it?
What do you like to use as a workaround? Container objects or macros? Do you feel it's worth it? The canonical way is to use a metafunction like this: template <typename T> struct my_string_map { typedef std::map<std::string, T> type; }; // Invoke: my_string_map<int>::type my_str_int_map; This is also used in the STL (allocator::rebind<U>) and in many libraries including Boost. We use it extensively in a bioinformatics library. It's bloated, but it's the best alternative 99% of the time. Using macros here is not worth the many downsides. (EDIT: I've amended the code to reflect Boost/STL conventions as pointed out by Daniel in his comment.)
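For contrast, this is roughly what the C++0x/C++11 alias template mentioned in the question looks like once a compiler supports it (no ::type needed at the call site):

#include <map>
#include <string>

template <typename T>
using my_string_map = std::map<std::string, T>;   // C++11 alias template

my_string_map<int> my_str_int_map;                // used directly, like a real typedef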
27,247
27,276
Could you recommend some guides about Epoll on Linux
I need to learn about epoll on Linux. Could you recommend a manual or guides for the epoll library? The more detailed the better, ideally with some examples. Thank you for reading.
Here's an introduction to Epoll, a pretty basic tutorial: http://blog.kovyrin.net/2006/04/13/epoll-asynchronous-network-programming/ A more complete example can be found here: https://banu.com/blog/2/how-to-use-epoll-a-complete-example-in-c/ Also, the epoll man pages are worth reading.
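For orientation, here is a hedged minimal skeleton of the calls those guides walk through (Linux-only, level-triggered, error handling trimmed; listen_fd is assumed to be an already bound, listening, non-blocking socket):

#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

int run_loop(int listen_fd)
{
    int epfd = epoll_create1(0);                        // one epoll instance

    struct epoll_event ev = {};
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);     // watch the listener

    struct epoll_event events[64];
    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);       // block until something is ready
        for (int i = 0; i < n; ++i) {
            if (events[i].data.fd == listen_fd) {
                int client = accept(listen_fd, 0, 0);   // new connection
                struct epoll_event cev = {};
                cev.events = EPOLLIN;
                cev.data.fd = client;
                epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
            } else {
                char buf[4096];
                ssize_t r = read(events[i].data.fd, buf, sizeof buf);
                if (r <= 0)
                    close(events[i].data.fd);           // closing removes it from epoll
                // else: process r bytes from buf
            }
        }
    }
    close(epfd);                                        // not reached; shown for completeness
    return 0;
}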
27,492
27,501
C++ Memory management
I learned in college that you always have to free your unused objects, but not how you actually do it, for example how to structure your code properly and so on. Are there any general rules on how to handle pointers in C++? I'm currently not allowed to use Boost; I have to stick to pure C++ because the framework I'm using forbids any use of generics.
I have worked with the embedded Symbian OS, which had an excellent system in place for this, based entirely on developer conventions. Only one object will ever own a pointer. By default this is the creator. Ownership can be passed on. To indicate passing of ownership, the object is passed as a pointer in the method signature (e.g. void Foo(Bar *zonk);). The owner will decide when to delete the object. To pass an object to a method just for use, the object is passed as a reference in the method signature (e.g. void Foo(Bar &zonk);). Non-owner classes may store references (never pointers) to objects they are given only when they can be certain that the owner will not destroy it during use. Basically, if a class simply uses something, it uses a reference. If a class owns something, it uses a pointer. This worked beautifully and was a pleasure to use. Memory issues were very rare.
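A small sketch of that convention in plain C++ (class names are made up; pre-C++11 style to match the no-Boost constraint): a pointer member or pointer parameter signals ownership, a reference signals use only.

class Resource { /* ... */ };

class Owner {
public:
    Owner() : res_(new Resource) {}
    ~Owner() { delete res_; }                  // the single owner deletes
    Resource& resource() { return *res_; }     // hand out references for use only
private:
    Resource* res_;                            // pointer member means "I own this"
    Owner(const Owner&);                       // non-copyable keeps ownership unambiguous
    Owner& operator=(const Owner&);
};

void Consume(Resource& r);                     // reference parameter: "I only use this"
void TakeOwnership(Resource* r);               // pointer parameter: ownership is transferred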
27,640
27,811
Integrating Qt into legacy MFC applications
We currently maintain a suite of MFC applications that are fairly well designed; however, the user interface is beginning to look tired and a lot of the code is in need of quite a bit of refactoring to tidy up some duplication and/or performance problems. We make use of quite a few custom controls that handle all their own drawing (all written using MFC). Recently I've been doing more research into Qt and the benefits it provides (cross-platform and supports what you might call a more "professional" looking framework for UI development). My question is - what would be the best approach to perhaps moving to the Qt framework? Does Qt play nice with MFC? Would it be better to start porting some of our custom controls to Qt and gradually integrate more and more into our existing MFC apps? (is this possible?). Any advice or previous experience is appreciated.
In my company, we are currently using Qt and are very happy with it. I personally never had to move an MFC app to the Qt framework, but here is something which might be of some interest to you: the Qt/MFC Migration Framework. It's part of Qt-Solutions, so this means you'll have to buy a Qt license along with a Qt-Solutions license. (edit: not anymore) I hope this helps!
27,700
27,749
C++ : Opening a file in non exclusive mode
I have to develop an application which parses a log file and sends specific data to a server. It has to run on both Linux and Windows. The problem appears when I want to test the log rolling system (which appends .1 to the name of the file and creates a new one with the same name). On Windows (haven't tested yet on Linux) I can't rename a file that I have opened with std::ifstream() (exclusive access?) even if I open it in "input mode" (ios::in). Is there a cross-platform way to open a file in a non-exclusive way?
Is there a way to open file in a non-exclusive way, Yes, using Win32, passing the various FILE_SHARE_Xxxx flags to CreateFile. is it cross platform? No, it requires platform-specific code. Due to annoying backwards compatibility concerns (DOS applications, being single-tasking, assume that nothing can delete a file out from under them, i.e. that they can fclose() and then fopen() without anything going amiss; Win16 preserved this assumption to make porting DOS applications easier, Win32 preserved this assumption to make porting Win16 applications easier, and it's awful), Windows defaults to opening files exclusively. The underlying OS infrastructure supports deleting/renaming open files (although I believe it does have the restriction that memory-mapped files cannot be deleted, which I think isn't a restriction found on *nix), but the default opening semantics do not. C++ has no notion of any of this; the C++ operating environment is much the same as the DOS operating environment--no other applications running concurrently, so no need to control file sharing.
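A hedged Win32-only sketch of those share flags (the Linux build can keep using std::ifstream, which already tolerates a rename underneath it):

// Win32-only: open the log for reading while still letting the logger
// rename or delete it. FILE_SHARE_DELETE is what permits the rename/roll.
#include <windows.h>

HANDLE OpenLogShared(const char* path)
{
    return CreateFileA(path,
                       GENERIC_READ,
                       FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                       NULL,
                       OPEN_EXISTING,
                       FILE_ATTRIBUTE_NORMAL,
                       NULL);
}
// Check the result against INVALID_HANDLE_VALUE, read with ReadFile(), close with CloseHandle().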
27,779
29,443
MFC resources / links
I am about to reenter the MFC world after years away for a new job. What resources do people recommend for refreshing my memory? I have been doing mainly C# recently. Also, are there any MFC-centric websites or blogs that people recommend?
For blogs: Your best bet would be the Visual C++ Team Blog. For books: Programming Windows with MFC is one of the best books on the subject. For tutorials: Simply search Google for various tutorials on MFC.
27,998
28,289
Exporting a C++ class from a DLL
Most of my C/C++ development involves monolithic module files and absolutely no classes whatsoever, so usually when I need to make a DLL with accessible functions I just export them using the standard __declspec(dllexport) directive. Then access them either dynamically via LoadLibrary() or at compile time with a header and lib file. How do you do this when you want to export an entire class (and all its public methods and properties)? Is it possible to dynamically load that class at runtime and if so, how? How would you do it with a header and lib for compile-time linking?
What about late-binding? As in loading it with LoadLibrary() and GetProcAddress()? I'm used to being able to load the library at run time and it would be great if you could do that here. So there are two ways to load the DLL. The first is to reference one or more symbols from the DLL (your classname, for example), supply an appropriate import .LIB and let the linker figure everything out. The second is to explicitly load the DLL via LoadLibrary. Either approach works fine for C-level function exports. You can either let the linker handle it or call GetProcAddress as you noted. But when it comes to exported classes, typically only the first approach is used, i.e., implicitly link to the DLL. In this case the DLL is loaded at application start time, and the application fails to load if the DLL can't be found. If you want to link to a class defined in a DLL, and you want that DLL to be loaded dynamically, sometime after program initiation, you have two options: Create objects of the class using a special factory function, which internally will have to use (a tiny bit of) assembler to "hook up" newly created objects to their appropriate offsets. This has to be done at run-time AFTER the DLL has been loaded, obviously. A good explanation of this approach can be found here. Use a delay-load DLL. All things considered... probably better to just go with implicit linking, in which case you definitely want to use the preprocessor export/import macro technique (sketched below). In fact, if you create a new DLL in Visual Studio and choose the "export symbols" option these macros will be created for you. Good luck...
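A sketch of the two pieces discussed above; the macro name, class name and factory functions are made up. MYLIB_EXPORTS would be defined only while building the DLL itself.

#ifdef MYLIB_EXPORTS
#  define MYLIB_API __declspec(dllexport)
#else
#  define MYLIB_API __declspec(dllimport)
#endif

class MYLIB_API Widget {                // implicit linking: the whole class is exported
public:
    Widget();
    void Render();
};

// For the LoadLibrary/GetProcAddress route, expose a flat C factory instead:
extern "C" MYLIB_API Widget* CreateWidget();
extern "C" MYLIB_API void    DestroyWidget(Widget* w);   // destroy on the DLL's side of the heap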
28,002
1,255,015
Regular cast vs. static_cast vs. dynamic_cast
I've been writing C and C++ code for almost twenty years, but there's one aspect of these languages that I've never really understood. I've obviously used regular casts i.e. MyClass *m = (MyClass *)ptr; all over the place, but there seem to be two other types of casts, and I don't know the difference. What's the difference between the following lines of code? MyClass *m = (MyClass *)ptr; MyClass *m = static_cast<MyClass *>(ptr); MyClass *m = dynamic_cast<MyClass *>(ptr);
static_cast static_cast is used for cases where you basically want to reverse an implicit conversion, with a few restrictions and additions. static_cast performs no runtime checks. This should be used if you know that you refer to an object of a specific type, and thus a check would be unnecessary. Example: void func(void *data) { // Conversion from MyClass* -> void* is implicit MyClass *c = static_cast<MyClass*>(data); ... } int main() { MyClass c; start_thread(&func, &c) // func(&c) will be called .join(); } In this example, you know that you passed a MyClass object, and thus there isn't any need for a runtime check to ensure this. dynamic_cast dynamic_cast is useful when you don't know what the dynamic type of the object is. It returns a null pointer if the object referred to doesn't contain the type casted to as a base class (when you cast to a reference, a bad_cast exception is thrown in that case). if (JumpStm *j = dynamic_cast<JumpStm*>(&stm)) { ... } else if (ExprStm *e = dynamic_cast<ExprStm*>(&stm)) { ... } You can not use dynamic_cast for downcast (casting to a derived class) if the argument type is not polymorphic. For example, the following code is not valid, because Base doesn't contain any virtual function: struct Base { }; struct Derived : Base { }; int main() { Derived d; Base *b = &d; dynamic_cast<Derived*>(b); // Invalid } An "up-cast" (cast to the base class) is always valid with both static_cast and dynamic_cast, and also without any cast, as an "up-cast" is an implicit conversion (assuming the base class is accessible, i.e. it's a public inheritance). Regular Cast These casts are also called C-style cast. A C-style cast is basically identical to trying out a range of sequences of C++ casts, and taking the first C++ cast that works, without ever considering dynamic_cast. Needless to say, this is much more powerful as it combines all of const_cast, static_cast and reinterpret_cast, but it's also unsafe, because it does not use dynamic_cast. In addition, C-style casts not only allow you to do this, but they also allow you to safely cast to a private base-class, while the "equivalent" static_cast sequence would give you a compile-time error for that. Some people prefer C-style casts because of their brevity. I use them for numeric casts only, and use the appropriate C++ casts when user defined types are involved, as they provide stricter checking.
28,124
28,130
calculating user defined formulas (with c++)
We would like to have user-defined formulas in our C++ program, e.g. the value v = x + (y - (z - 2)) / 2. Later in the program the user would define x, y and z, and the program should return the result of the calculation. Sometime later the formula may be changed, and the program should then parse the new formula and apply the new values. Any ideas / hints on how to do something like this? So far the only solution I have come up with is to write a parser to calculate these formulas - maybe any ideas about that?
If it will be used frequently and if it will be extended in the future, I would almost recommend adding either Python or Lua into your code. Lua is a very lightweight scripting language which you can hook into and provide new functions, operators etc. If you want to do more robust and complicated things, use Python instead.
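A hedged sketch of the Lua route for exactly the formula in the question, assuming the Lua 5.x C headers and library are available to link against:

// Push the variables, run the user's formula as a chunk, read back v.
extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}

double evalFormula(const char* formula, double x, double y, double z)
{
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    lua_pushnumber(L, x); lua_setglobal(L, "x");
    lua_pushnumber(L, y); lua_setglobal(L, "y");
    lua_pushnumber(L, z); lua_setglobal(L, "z");

    // e.g. formula = "v = x + (y - (z - 2)) / 2"
    double v = 0.0;
    if (luaL_dostring(L, formula) == 0) {      // 0 means the chunk ran successfully
        lua_getglobal(L, "v");
        v = lua_tonumber(L, -1);
    }
    lua_close(L);
    return v;
}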
28,160
28,215
Multiple classes in a header file vs. a single header file per class
For whatever reason, our company has a coding guideline that states: Each class shall have its own header and implementation file. So if we wrote a class called MyString we would need an associated MyString.h and MyString.cxx. Does anyone else do this? Has anyone seen any compilation performance repercussions as a result? Do 5000 classes in 10000 files compile just as quickly as 5000 classes in 2500 files? If not, is the difference noticeable? [We code C++ and use GCC 3.4.4 as our everyday compiler]
The term here is translation unit, and you really want to (if possible) have one class per translation unit, i.e. one class implementation per .cpp file, with a corresponding .h file of the same name. It's usually more efficient (from a compile/link standpoint) to do things this way, especially if you're doing things like incremental linking and so forth. The idea being, translation units are isolated such that, when one translation unit changes, you don't have to rebuild a lot of stuff, as you would have to if you started lumping many abstractions into a single translation unit. Also, you'll find many errors/diagnostics are reported via file name ("Error in Myclass.cpp, line 22") and it helps if there's a one-to-one correspondence between files and classes. (Or I suppose you could call it a 2 to 1 correspondence.)
28,212
28,244
How can I convert types in C++?
I'm using two different libraries in my project, and both of them supply a basic rectangle struct. The problem with this is that there seems to be no way to insert a conversion between the types, so I can't call a function in one library with the result from a function in the other. If I were the author of either of these, I could create conversions; from the outside, I can't. library a: typedef struct rectangle { sint16 x; sint16 y; uint16 w; uint16 h; } rectangle; library b: class Rect { int x; int y; int width; int height; /* ... */ }; Now, I can't make a converter class, because C++ will only look for a conversion in one step. This is probably a good thing, because there would be a lot of possibilities involving creating new objects of all kinds of types. I can't make an operator that takes the struct from a and supplies an object of the class from b: foo.cpp:123 error: ‘operator b::Rect(const rectangle&)’ must be a nonstatic member function So, is there a sensible way around this? edit: I should perhaps also point out that I'd really like some solution that makes working with the result seamless, since I don't expect to be that coder. (Though I agree, old-school, explicit conversion would have been a good choice. The other branch, reinterpret_cast, has the same problem.) edit2: Actually, none of the suggestions really answer my actual question; Konrad Rudolph seems to be correct. C++ actually can't do this. Sucks, but true. (If it makes any difference, I'm going to try subclassing as suggested by CodingTheWheel.)
If you can't modify the structures then you have no alternative to writing a manual conversion function because overloading conversion operators only works within the class body. There's no other way.
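A sketch of such helpers in one small shared header; it assumes both libraries' headers are included and that library b's Rect offers an (x, y, w, h) constructor and accessors, so adjust to whatever its real API provides. The call sites stay short even though the conversion is explicit.

// Free conversion helpers; the Rect constructor and accessors are assumptions.
inline Rect toRect(const rectangle& r)
{
    return Rect(r.x, r.y, r.w, r.h);
}

inline rectangle toRectangle(const Rect& r)
{
    rectangle out;
    out.x = static_cast<sint16>(r.x());
    out.y = static_cast<sint16>(r.y());
    out.w = static_cast<uint16>(r.width());
    out.h = static_cast<uint16>(r.height());
    return out;
}

// usage (hypothetical call sites):  b_draw(toRect(a_get_bounds()));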
29,242
29,395
Off-the-Shelf C++ Hex Dump Code
I work a lot with network and serial communications software, so it is often necessary for me to have code to display or log hex dumps of data packets. Every time I do this, I write yet another hex-dump routine from scratch. I'm about to do so again, but figured I'd ask here: Is there any good free hex dump code for C++ out there somewhere? Features I'd like: N bytes per line (where N is somehow configurable) optional ASCII/UTF8 dump alongside the hex configurable indentation, per-line prefixes, per-line suffixes, etc. minimal dependencies (ideally, I'd like the code to all be in a header file, or be a snippet I can just paste in) Edit: Clarification: I am looking for code that I can easily drop in to my own programs to write to stderr, stdout, log files, or other such output streams. I'm not looking for a command-line hex dump utility.
The unix tool xxd is distributed as part of vim, and according to http://www.vmunix.com/vim/util.html#xxd, the source for xxd is ftp://ftp.uni-erlangen.de:21/pub/utilities/etc/xxd-1.10.tar.gz. It was written in C and is about 721 lines. The only licensing information given for it is this: * Distribute freely and credit me, * make money and share with me, * lose money and don't ask me. The unix tool hexdump is available from http://gd.tuwien.ac.at/softeng/Aegis/hexdump.html. It was written in C and can be compiled from source. It's quite a bit bigger than xxd, and is distributed under the GPL.
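Since the question asks for drop-in code rather than a tool, here is a hedged, minimal header-only routine along the requested lines (configurable bytes per line, ASCII column, per-line prefix; not tuned for speed):

#include <ostream>
#include <iomanip>
#include <cctype>
#include <cstddef>

inline void hexDump(std::ostream& os, const void* data, std::size_t len,
                    std::size_t bytesPerLine = 16, const char* prefix = "")
{
    const unsigned char* p = static_cast<const unsigned char*>(data);
    for (std::size_t i = 0; i < len; i += bytesPerLine) {
        os << prefix << std::hex << std::setw(8) << std::setfill('0') << i << "  ";
        for (std::size_t j = 0; j < bytesPerLine; ++j) {
            if (i + j < len)
                os << std::setw(2) << static_cast<unsigned>(p[i + j]) << ' ';
            else
                os << "   ";                                   // pad the last line
        }
        os << ' ';
        for (std::size_t j = 0; j < bytesPerLine && i + j < len; ++j)
            os << (std::isprint(p[i + j]) ? static_cast<char>(p[i + j]) : '.');
        os << std::dec << '\n';
    }
}
// usage: hexDump(std::cerr, packet.data(), packet.size(), 16, "  RX ");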
29,383
29,571
Converting bool to text in C++
Maybe this is a dumb question, but is there any way to convert a boolean value to a string such that 1 turns to "true" and 0 turns to "false"? I could just use an if statement, but it would be nice to know if there is a way to do that with the language or standard libraries. Plus, I'm a pedant. :)
How about using the C++ language itself? bool t = true; bool f = false; std::cout << std::noboolalpha << t << " == " << std::boolalpha << t << std::endl; std::cout << std::noboolalpha << f << " == " << std::boolalpha << f << std::endl; UPDATE: If you want more than 4 lines of code without any console output, please go to cppreference.com's page talking about std::boolalpha and std::noboolalpha, which shows you the console output and explains more about the API. Additionally, using std::boolalpha will modify the global state of std::cout, so you may want to restore the original behavior; go here for more info on restoring the state of std::cout.
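A tiny sketch of that restore step, saving std::cout's formatting flags around the boolalpha output:

#include <iostream>

void printFlag(bool value)
{
    std::ios_base::fmtflags saved = std::cout.flags();  // remember the current formatting state
    std::cout << std::boolalpha << value << '\n';       // prints "true" / "false"
    std::cout.flags(saved);                             // restore (back to 1/0 by default)
}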
29,890
29,916
How to get your own (local) IP-Address from an udp-socket (C/C++)
You have multiple network adapters. Bind a UDP socket to a local port, without specifying an address. Receive packets on one of the adapters. How do you get the local IP address of the adapter which received the packet? The question is, "What is the IP address of the receiving adapter?", not the address of the sender, which we get in the receive_from( ..., &senderAddr, ... ); call.
You could enumerate all the network adapters, get their IP addresses and compare the part covered by the subnet mask with the sender's address. Like: IPAddress FindLocalIPAddressOfIncomingPacket( senderAddr ) { foreach( adapter in EnumAllNetworkAdapters() ) { adapterSubnet = adapter.subnetmask & adapter.ipaddress; senderSubnet = adapter.subnetmask & senderAddr; if( adapterSubnet == senderSubnet ) { return adapter.ipaddress; } } }
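A hedged POSIX implementation of that pseudocode using getifaddrs() (Linux/BSD; Windows would use GetAdaptersAddresses instead). Like the pseudocode, it only resolves senders that are on a directly attached subnet:

#include <ifaddrs.h>
#include <sys/socket.h>
#include <netinet/in.h>

bool findLocalAddrForSender(const in_addr& sender, in_addr* local_out)
{
    ifaddrs* list = 0;
    if (getifaddrs(&list) != 0)
        return false;

    bool found = false;
    for (ifaddrs* ifa = list; ifa; ifa = ifa->ifa_next) {
        if (!ifa->ifa_addr || ifa->ifa_addr->sa_family != AF_INET || !ifa->ifa_netmask)
            continue;
        const in_addr addr = reinterpret_cast<const sockaddr_in*>(ifa->ifa_addr)->sin_addr;
        const in_addr mask = reinterpret_cast<const sockaddr_in*>(ifa->ifa_netmask)->sin_addr;
        if ((addr.s_addr & mask.s_addr) == (sender.s_addr & mask.s_addr)) {
            *local_out = addr;                 // this adapter shares the sender's subnet
            found = true;
            break;
        }
    }
    freeifaddrs(list);
    return found;
}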
30,099
30,186
C++ - What does "Stack automatic" mean?
In my browsings amongst the Internet, I came across this post, which includes this "(Well written) C++ goes to great lengths to make stack automatic objects work "just like" primitives, as reflected in Stroustrup's advice to "do as the ints do". This requires a much greater adherence to the principles of Object Oriented development: your class isn't right until it "works like" an int, following the "Rule of Three" that guarantees it can (just like an int) be created, copied, and correctly destroyed as a stack automatic." I've done a little C, and C++ code, but just in passing, never anything serious, but I'm just curious, what it means exactly? Can someone give an example?
Stack objects are handled automatically by the compiler. When the scope is left, it is deleted. { obj a; } // a is destroyed here When you do the same with a 'newed' object you get a memory leak : { obj* b = new obj; } b is not destroyed, so we lost the ability to reclaim the memory b owns. And maybe worse, the object cannot clean itself up. In C the following is common : { FILE* pF = fopen( ... ); // ... do sth with pF fclose( pF ); } In C++ we write this : { std::fstream f( ... ); // do sth with f } // here f gets auto magically destroyed and the destructor frees the file When we forget to call fclose in the C sample the file is not closed and may not be used by other programs. (e.g. it cannot be deleted). Another example, demonstrating the object string, which can be constructed, assigned to and which is destroyed on exiting the scope. { string v( "bob" ); string k; v = k // v now contains "bob" } // v + k are destroyed here, and any memory used by v + k is freed
30,373
30,420
What C++ pitfalls should I avoid?
I remember first learning about vectors in the STL and after some time, I wanted to use a vector of bools for one of my projects. After seeing some strange behavior and doing some research, I learned that a vector of bools is not really a vector of bools. Are there any other common pitfalls to avoid in C++?
A short list might be: Avoid memory leaks through the use of shared pointers to manage memory allocation and cleanup Use the Resource Acquisition Is Initialization (RAII) idiom to manage resource cleanup - especially in the presence of exceptions Avoid calling virtual functions in constructors Employ minimalist coding techniques where possible - for example, declaring variables only when needed, scoping variables, and early-out design where possible. Truly understand the exception handling in your code - both with regard to exceptions you throw, as well as ones thrown by classes you may be using indirectly. This is especially important in the presence of templates. RAII, shared pointers and minimalist coding are of course not specific to C++, but they help avoid problems that do frequently crop up when developing in the language. Some excellent books on this subject are: Effective C++ - Scott Meyers More Effective C++ - Scott Meyers C++ Coding Standards - Sutter & Alexandrescu C++ FAQs - Cline Reading these books has helped me more than anything else to avoid the kind of pitfalls you are asking about.
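A tiny illustration of the first two points, using C++11's std::shared_ptr (pre-C++11 code would reach for std::tr1::shared_ptr or Boost); the class and file names are made up:

#include <memory>
#include <cstdio>

struct File {                                      // RAII: the destructor always runs,
    explicit File(const char* path) : f(std::fopen(path, "r")) {}
    ~File() { if (f) std::fclose(f); }             // even when an exception unwinds the stack
    std::FILE* f;
private:
    File(const File&);                             // non-copyable keeps ownership unambiguous
    File& operator=(const File&);
};

void process()
{
    std::shared_ptr<int> counter(new int(0));      // freed automatically, even on throw
    File cfg("settings.ini");                      // closed automatically at scope exit
    // ... work that may throw ...
}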
30,521
30,597
Qt Child Window Placement
Is there a way to specify a child's initial window position in Qt? I have an application that runs on Linux and Windows and it looks like the default behavior of Qt lets the Window Manager determine the placement of the child windows. On Windows, this is in the center of the screen the parent is on which seems reasonable. On Linux, in GNOME (metacity) it is always in the upper left-hand corner which is annoying. I can't find any window manager preferences for metacity that allow me to control window placement so I would like to override that behavior.
Qt Widget Geometry: call the move(x, y) method on the child window before show(). The default values for x and y are 0, so that's why it appears in the upper left-hand corner. You can also use the position of the parent window to compute a relative position for the child.
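A small sketch of that relative placement, centering the child over its parent before showing it (both are assumed to be top-level QWidget pointers):

#include <QWidget>
#include <QRect>

void showCentered(QWidget* parent, QWidget* child)
{
    const QRect p = parent->frameGeometry();
    child->move(p.center() - child->rect().center());   // position first...
    child->show();                                       // ...then show
}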
ProCQA

Dataset by jordane95

Github Repo
