Posts tagged ‘computer science’
How to swap two integers in C++
Warning: this post got embarrassingly long. For the short version, read up through the paragraph that starts with “If you use.”
This is something that’s been bugging me for a while, and I’d like to lay it to rest. If you’ve done any work in C++, you’ve probably heard the riddle about how to swap two integers without using a temporary variable. The question is exactly what it sounds like: you have two integers (call them x and y), you want to swap their values, and you may or may not want to use a third, temporary variable while doing it. Here are some ways of swapping integers that I have actually seen in professional code written by professional coders:
| Method Name | Code |
|---|---|
| Method A | `{ /* Limit the scope of t */ int t = x; x = y; y = t; }` |
| Method B | `x ^= y; y ^= x; x ^= y;` |
| Method C | `x ^= y ^= x ^= y;` |
| Method D | `x = x + y; y = x - y; x = x - y;` |
| Method E | `std::swap(x, y);` |
Which of these is the fastest way to swap integers? Which is the fastest method that doesn’t require extra memory? When would you want to use a different method? What is the overall best method?
Try to come up with some answers before reading further.
When you’ve decided on your answers, let’s take a closer look. No, on second thought, this got really long. For those of you who don’t want to read all the way to the end, let me just get my main point out of the way. After that, we’ll take a closer look. If you don’t want to spoil the ending, skip the next paragraph.
If you use this link you will spoil the ending, so skip to the next paragraph after this one. →
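Whichever method you favor, one hazard is worth demonstrating before moving on: the no-temporary methods (B, C, and D) all destroy the value when the two integers are actually the same object, which happens easily once the swap is wrapped in a function, e.g. swap(v[i], v[j]) with i == j. (Method C is also undefined behavior before C++11, since it modifies x twice in one expression without sequencing.) A minimal sketch of the aliasing problem:

```cpp
#include <iostream>
#include <utility>

// Method B wrapped in a function, so the two parameters can alias.
void xor_swap(int& x, int& y) {
    x ^= y;  // if x and y alias, this zeroes x (and therefore y)...
    y ^= x;  // ...so y stays 0...
    x ^= y;  // ...and x stays 0.
}

int main() {
    int a = 42;
    xor_swap(a, a);          // "swap" a with itself
    std::cout << a << "\n";  // prints 0, not 42
    a = 42;
    std::swap(a, a);         // Method E handles aliasing fine
    std::cout << a << "\n";  // prints 42
}
```

Methods A and E are immune because the temporary preserves the value even when both names refer to the same integer.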
Tell me about your ideal programming language.
Suppose you could create the specification for your ideal programming language. It must be possible to implement (it can’t solve undecidable problems, etc), but other than that the sky’s the limit. What would you put in it (what kind of syntax, intrinsics, etc)? For what sort of applications would it be used? What other features would it have? Would the interpreter/compiler do anything unusual? Tell me anything you want about your ideal language and the tools that go with it. You’re welcome to answer some of these questions if you don’t have answers to all of them.
I’ve been asking variations of this question to all sorts of people, and it’s fascinating seeing where they agree and where they don’t.
An interesting algorithm
So, even before I get to what the algorithm does (which is quite interesting on its own, even before you find out how it accomplishes that), the very first intriguing part is its name. It’s officially referred to as the Blum-Floyd-Pratt-Rivest-Tarjan Algorithm. Is your mouth watering yet? I thought so.[1] However, CLRS refers to it as the Select Algorithm (see section 9.3), so that’s what I’m going to call it. If you have an unsorted array of n items, it’s a worst-case O(n) algorithm for finding the ith largest item (often the median, since that’s a commonly used statistic). Here’s how the algorithm works:
- Put all the elements of the input into groups of 5 (with one group at the very end that could have fewer than 5 elements, of course). Find the median of each group.
- Recursively use the Select algorithm to find the median of all the group-of-5 medians from step 1. Call this median of medians M.
- Take all the items from the input and partition them into the group of items less than M and the group of items bigger than M.
- If M is the ith biggest, that’s the answer, and we’re done. If it’s not, you know which partition contains the ith biggest item, and you can recursively call Select on that group (with a different i if the group it’s in is the group of items smaller than M).
To give an example of that last statement, suppose we’re looking for the 65th biggest number from a group of 100 numbers, and we find that M is the 53rd largest. There are 47 numbers smaller than M, and when we recurse, we’ll now be looking for the 12th largest element of that set of 47. If instead M had been the 68th largest number, in our recursive step we’d look at the 67 numbers larger than M and continue to look for the 65th largest one.
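Here’s a sketch of the whole algorithm in C++ to make the four steps concrete. It returns the ith smallest element, 0-indexed (the discussion above is phrased in terms of the ith largest; the bookkeeping is symmetric), and it aims for clarity rather than speed:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// A sketch of Select (median of medians), not a tuned implementation.
// Returns the ith smallest element of v, 0-indexed.
int select_ith(std::vector<int> v, std::size_t i) {
    while (true) {
        if (v.size() <= 5) {
            std::sort(v.begin(), v.end());
            return v[i];
        }
        // Step 1: split into groups of 5 and take each group's median.
        std::vector<int> medians;
        for (std::size_t g = 0; g < v.size(); g += 5) {
            std::size_t end = std::min(g + 5, v.size());
            std::sort(v.begin() + g, v.begin() + end);
            medians.push_back(v[g + (end - g) / 2]);
        }
        // Step 2: recursively find the median of the medians, M.
        int M = select_ith(medians, medians.size() / 2);
        // Step 3: partition around M.
        std::vector<int> less, equal, greater;
        for (int x : v) {
            if (x < M) less.push_back(x);
            else if (x > M) greater.push_back(x);
            else equal.push_back(x);
        }
        // Step 4: keep only the partition that holds the answer,
        // adjusting i when we throw away elements below it.
        if (i < less.size()) {
            v = std::move(less);
        } else if (i < less.size() + equal.size()) {
            return M;  // M itself is the ith smallest
        } else {
            i -= less.size() + equal.size();
            v = std::move(greater);
        }
    }
}
```

In practice you would reach for std::nth_element, which typically uses a quickselect variant with expected (rather than worst-case) linear time.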
Now, I claimed the running time is linear, and I’d better back up that claim. The trick is to consider how many items get thrown out in that last step. Half the group-of-5 medians are larger than M, and half are smaller. Without loss of generality, suppose the ith largest item is in the group that is larger than M. Now, not only do we know that we can henceforth ignore the half of the group-of-5 medians smaller than M, but for each of those medians we can also ignore the two items in its group of 5 that were smaller than that median. In other words, we can eliminate at least 3/10 of the elements we were searching over (we can eliminate 3/5 of the items in the groups whose median is smaller than M, and half of all the groups fit this category). Sure, I’m eliding an additive constant for the items in the group with M itself (not to mention the items in that last small group at the end when n is not a multiple of 5), but that’s not going to affect the asymptotic running time.
So now we can build the recurrence relation at work here. If T(n) is the amount of work done to perform Select on n items, we have
T(n) = O(n) + T(n/5) + T(7n/10)
That is to say, we do O(n) work in steps 1 and 3, we do T(n/5) work in step 2, and we do at most T(7n/10) work in step 4, because at least 3/10 of the elements get discarded before we recurse. I’ll leave it as an exercise to the reader to draw out the recursion tree and find the actual amount of work done, but I assure you T(n) is in O(n).
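If you’d rather check that claim by substitution than by drawing the tree, assume inductively that T(m) ≤ cm for all m < n and that the O(n) term is at most an. Then

$$T(n) \le \frac{cn}{5} + \frac{7cn}{10} + an = \frac{9cn}{10} + an \le cn \quad \text{whenever } c \ge 10a,$$

so the induction closes and T(n) = O(n). The essential point is that 1/5 + 7/10 < 1, so the total work shrinks at each level of the recursion.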
The crazy part here is that magic number 5: If you do this with 3 items per group, the running time is O(n log n). If you do it with more than 5, the running time is still linear, but slower (you spend more time in step 1 finding all the initial medians). The optimal number of items per group really is 5. Nifty!
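The same accounting shows where 3 goes wrong: with groups of 3, step 2 recurses on n/3 medians, and step 4 can only discard about a third of the elements, giving

$$T(n) = T(n/3) + T(2n/3) + O(n).$$

Since 1/3 + 2/3 = 1, the work per level of the recursion tree no longer shrinks, and with Θ(log n) levels the total is Θ(n log n).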
[1] For those of you not in the know, these are all really famous names in algorithms. Blum is a co-inventor of the CAPTCHA. Floyd is from the Floyd-Warshall all-pairs shortest-paths algorithm, which is a staple of introductory algorithms courses. Pratt is from the Knuth-Morris-Pratt string matching algorithm, which was groundbreaking when it was new but is nearly obsolete now. Rivest is the R in RSA encryption, one of the most widely used encryption schemes around. Tarjan is the co-creator of splay trees, a kind of self-balancing tree with the handy property that commonly accessed items float to the top. Blum, Floyd, Rivest, and Tarjan have each won the Turing Award, which is like the Nobel Prize of computer science.
C++ is bad: problems with the ternary operator, addendum
I’ve found another interesting wrinkle in the ternary operator. So, here’s a bonus question for that quiz from last time. You start with the following code:
```cpp
class Abstract;
class DerivedOne;  // Inherits from Abstract
class DerivedTwo;  // Inherits from Abstract

Abstract x;
DerivedOne one;
DerivedTwo two;
bool test;
```
Again, you can assume that all of these are defined/initialized (and you can assume the comments are correct: both `Derived*` classes inherit from `Abstract`). Now, consider these snippets:
| # | Code Snippet A | Code Snippet B |
|---|---|---|
| Bonus | `if (test) x = one; else x = two;` | `x = (test ? one : two);` |
If you read the previous installment, you’ve probably guessed by now that these are not equivalent, and you’d be right. In what ways do they differ?
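If you’d like to check your answer against a compiler, here’s the heart of it (with the forward declarations filled in so the snippet builds on its own): the conditional operator must give its second and third operands a common type, and C++ will not convert two sibling classes to their shared base on its own.

```cpp
class Abstract {};
class DerivedOne : public Abstract {};
class DerivedTwo : public Abstract {};

Abstract x;
DerivedOne one;
DerivedTwo two;
bool test = true;

void difference() {
    // Snippet A compiles: each branch slices the derived object into x
    // through Abstract's copy assignment.
    if (test) x = one; else x = two;

    // Snippet B does not compile: there is no conversion between
    // DerivedOne and DerivedTwo, so the ternary has no common type.
    // A typical diagnostic: "operands to '?:' have different types".
    // x = (test ? one : two);
}
```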
C++ is bad: problems with the ternary operator
In today’s installment of “why not to program in C++,” I give you the following quiz, which Dustin, Steve, Tom, and I had to figure out today (Dustin’s code was doing the weirdest things, and we eventually traced it down to this):
Suppose you start out with the following code:
```cpp
class Argument;
Argument x;
void Foo(const Argument& arg);
bool test;
```
You can assume that all of these are defined/initialized elsewhere in the code. For each pair of code snippets below, decide whether the two snippets are equivalent to each other.
| # | Code Snippet A | Code Snippet B |
|---|---|---|
| 1 | `if (test) Foo(x); else Foo(Argument());` | `Foo(test ? x : Argument());` |
| 2 | `{ /* limit the scope of y */ Argument y; Foo(test ? x : y); }` | `Foo(test ? x : Argument());` |
| 3 | `Foo(x);` | `Foo(test ? x : x);` |
| 4 | `Foo(x);` | `Foo(true ? x : Argument());` |
Edit: what I meant by the curly braces in Question 2 is that you shouldn’t consider “`y` is now a defined variable” to be a significant difference between the two snippets.
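If you’d rather interrogate a compiler than reason it out, here’s a self-contained version of the setup with a copy constructor that announces itself (the comments record what a conforming compiler prints). The punchline is the ternary’s value category: mixing an lvalue like x with a temporary forces the whole expression to become a temporary, copy-constructed from whichever operand is selected.

```cpp
#include <iostream>

struct Argument {
    Argument() {}
    Argument(const Argument&) { std::cout << "copy!\n"; }
};

void Foo(const Argument&) {}

int main() {
    Argument x;
    bool test = true;

    if (test) Foo(x); else Foo(Argument());  // 1A: no copy on either path
    Foo(test ? x : Argument());  // 1B: prints "copy!"; the result is a
                                 //     temporary copy-constructed from x
    {
        Argument y;
        Foo(test ? x : y);       // 2A: both operands are lvalues, no copy
    }
    Foo(test ? x : x);           // 3B: lvalues again, no copy
    Foo(true ? x : Argument());  // 4B: still copies, even though the
                                 //     condition is a compile-time constant
}
```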
An interesting data structure
Admittedly, I can’t think of an uncontrived use case where I would prefer this to a hash table, union-find, or traditional bit field, but I still think it’s a pretty cool concept. It’s a data structure that holds n items, with constant-time initialization (meaning you never have to pre-initialize all n slots up front) and constant-time random access.

Moreover, it can be used as a set: on top of the constant-time random access, you can store and remove items and get the number of items in constant time, and you can iterate through all the inserted items in O(n) time (where n is the number of items currently stored in the set, not the maximum number it can hold). And on top of that, if you don’t need to call the items’ destructors (for instance, you’re storing doubles or ints), you can empty the set in constant time! The only drawback to using it as a set is that it takes memory proportional to the size of the keyspace (for instance, if you’re storing items indexed by b-bit numbers, it will take O(2^b) space).

However, it can also be used as a bit vector: to store u bits, it takes a mere O(u) space, and you don’t need to zero all the bits out at the beginning! This means that if you only end up using a sparse subset of the bits, you don’t waste time initializing the bits you never touch. To be fair, the constant hidden behind that O(u) space is bigger than the one in a normal bit vector, but I think it’s still a pretty neat idea. Besides, space is cheap these days; people already often spend a byte (or four) for every bit they need.
Anywho, here’s a great write-up of it, along with the original 1993 paper that introduces it (see the top of page 4 to compare the running times of this to a traditional bit field). On top of what is written in those links, the space required can be cut in half by only having a single array instead of two: just have it point to other parts of itself. Insertion/deletion gets a little more complicated, but it totally works. Neat!
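To make the two-array version concrete, here’s a sketch of my own (not the paper’s code): the dense array packs the stored keys, the sparse array maps a key to its slot in the dense array, and membership is decided by cross-checking the two, which is exactly why neither array ever needs to be initialized.

```cpp
#include <cstddef>
#include <vector>

// A sparse-set sketch: integers in [0, capacity), O(1) insert/remove/
// lookup/clear, O(size) iteration, no initialization of the arrays needed.
class SparseSet {
public:
    explicit SparseSet(std::size_t capacity)
        : sparse_(capacity), dense_(capacity), size_(0) {}
    // Caveat: std::vector zero-fills, which quietly re-introduces the O(n)
    // initialization; a real implementation would use raw uninitialized
    // storage to get the O(1) construction the post is about.

    bool contains(std::size_t k) const {
        std::size_t i = sparse_[k];          // may be garbage; that's fine
        return i < size_ && dense_[i] == k;  // the cross-check catches it
    }
    void insert(std::size_t k) {
        if (contains(k)) return;
        dense_[size_] = k;
        sparse_[k] = size_;
        ++size_;
    }
    void remove(std::size_t k) {
        if (!contains(k)) return;
        std::size_t i = sparse_[k];
        dense_[i] = dense_[size_ - 1];  // move the last key into the hole
        sparse_[dense_[i]] = i;
        --size_;
    }
    void clear() { size_ = 0; }  // O(1): no zeroing, no destructors
    std::size_t size() const { return size_; }
    // Iterate the members as dense_[0] .. dense_[size_ - 1].

private:
    std::vector<std::size_t> sparse_;  // sparse_[k] = k's slot in dense_
    std::vector<std::size_t> dense_;   // the members, packed contiguously
    std::size_t size_;
};
```

The constant-time clear and O(n) iteration fall straight out of the dense array; the single-array variant mentioned above interleaves these two arrays into one, halving the space.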
Liar’s Poker: a tricky problem, part 3
I’ve been working off and on to find the optimal strategy for the final round of a game of Liar’s Poker. I’ve gotten a bit farther on it, and it now looks solvable. I’ve now solved the 12-card-deck version (i.e., if the deck only contains aces, kings, and queens), and I suspect I’ve got an algorithm that will solve the whole problem in a reasonable amount of time.
Liar’s Poker: a tricky problem, part 2
I recently wrote about a card game called Liar’s Poker and my endeavours to find an optimal strategy for the final round. Some extra progress has been made, but it’s still a long way off. Despite what I wrote when I edited my previous post, I’m no longer convinced that we can adapt Simplex or a gradient descent algorithm to solve this. It turns out that although the gradient function is piecewise linear, it has an exponential number of distinct linear regions, and I’m not convinced we can do something like Simplex without enumerating them all. But here’s some other stuff Reid and I have done on the problem:
Liar’s Poker: a tricky problem
Liar’s Poker is a card game we sometimes play around the office. It’s a pretty simple game to learn if you’re already familiar with poker hands: the rules to Liar's Poker, followed by some work figuring out the strategy for the final round →