This is the solution that we propose. We shall keep all sampled bits in a single byte variable; each time we sample the binary input, we shall shift all the previously sampled bits one position to the left and put the new value at the LSB position of that byte. Then we shall use the last three sampled bits (i.e. the bits lying in the three LSB positions) to "vote" on the final result. This is what it looks like in practice:
#define votingByte 0b11101000
unsigned char samplingByte;
.
.
.
unsigned char vote23(unsigned char inputByte)
{
    // Notes:
    // In C, left-shifting an unsigned variable always fills the vacant LSB with "0".
    // This function assumes the input bit to be at the LSB position of "inputByte".

    // Shift all previously sampled bits one position to the left.
    samplingByte <<= 1;
    // Copy the LSB of "inputByte" to the LSB position of "samplingByte".
    samplingByte |= (inputByte & 1);
    // Use the last three sampled bits as an index into "votingByte".
    if(votingByte & (1 << (samplingByte & 7)))
    {
        return 1;
    }
    else
    {
        return 0;
    }
}
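A possible way of calling the routine, for illustration only: the periodic handler name and the readInputPin() helper (returning the raw input bit, 0 or 1) are assumptions of this sketch, not part of the routine above.

void timerTickHandler(void)    // called periodically, e.g. from a timer interrupt
{
    // Feed the freshly read input bit to the voting routine.
    if(vote23(readInputPin()))
    {
        // Input considered high (at least 2 of the last 3 samples were 1).
    }
    else
    {
        // Input considered low.
    }
}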
Observe that voting is done by picking a bit (out of eight possible) from the specially defined 8-bit constant. In essence, we use the last three sampled bits as the "address" of the bit position inside the constant at which the solution (0 or 1) is written in advance. Since 3 sampled bits can produce 8 unique combinations, a single 8-bit constant suffices for encoding all possible result bits.
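To see where the constant comes from: the bit at position n of votingByte is set to 1 exactly when the 3-bit value n contains at least two ones, i.e. for n = 3, 5, 6 and 7, which yields 0b11101000 (0xE8). The stand-alone program below is our own verification sketch, not part of the voting routine; it rebuilds the constant by that rule.

#include <stdio.h>

int main(void)
{
    unsigned char table = 0;
    unsigned char n;

    for(n = 0; n < 8; n++)
    {
        // Count the ones in the 3-bit value "n".
        unsigned char ones = (n & 1) + ((n >> 1) & 1) + ((n >> 2) & 1);
        // Majority of ones -> the result bit at position "n" must be 1.
        if(ones >= 2)
        {
            table |= (unsigned char)(1 << n);
        }
    }
    printf("0x%02X\n", table);    // prints 0xE8, i.e. 0b11101000
    return 0;
}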
In most applications, it is not necessary to define a standalone voting function; it is in fact easier and more efficient to perform the three crucial steps in-line wherever voting is needed in the program, as sketched below. Each such insertion requires approximately ten additional bytes of program memory, but it spares the processor the nuisance of the call-and-return overhead.
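The in-line form might look like this (a sketch only; INPUT_REGISTER is a placeholder for whatever port or variable holds the freshly read input bit in a given application):

samplingByte <<= 1;                          // make room for the new sample
samplingByte |= (INPUT_REGISTER & 1);        // take the freshly read input bit
if(votingByte & (1 << (samplingByte & 7)))   // vote on the last three samples
{
    // input considered high
}
else
{
    // input considered low
}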
As has been explained, the voting technique is most effective when waiting at a checkpoint is built into the program, as it prevents false triggering. But if more refined decision making is necessary, for example when reading the keyboard, then somewhat more complex debouncing routines should be taken into consideration. True debouncing is in essence equivalent to low-pass filtering plus Schmitt triggering of the digital input signal in the time domain, meaning that it filters out not only short pulses but random noise as well.
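As an illustration of that idea, here is a minimal counter-based debouncer sketch of our own (the DEBOUNCE_MAX value and the names are assumptions of the example, not a routine taken from a particular application). The counter acts as a crude low-pass filter, while changing the reported state only at the two extreme counter values provides the Schmitt-trigger-like hysteresis.

#define DEBOUNCE_MAX 10    // assumed: how far the counter must integrate before the state flips

static unsigned char debounceCounter = 0;
static unsigned char debouncedState = 0;

unsigned char debounce(unsigned char rawBit)
{
    // Integrate the raw samples: count up while the input is high, down while it is low.
    if(rawBit & 1)
    {
        if(debounceCounter < DEBOUNCE_MAX)
        {
            debounceCounter++;
        }
    }
    else
    {
        if(debounceCounter > 0)
        {
            debounceCounter--;
        }
    }
    // Hysteresis: change the reported state only at the extreme counter values.
    if(debounceCounter == DEBOUNCE_MAX)
    {
        debouncedState = 1;
    }
    else if(debounceCounter == 0)
    {
        debouncedState = 0;
    }
    return debouncedState;
}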
Yet the battle with noise and uncertainty, waged to make the world a safer place for digital circuits, doesn't stop there. One well-known tactic can be thought of as a logical extension of the binary logic system, in the sense that bits are allowed to acquire not only two distinct values (0 or 1), but any partial value between the two extremes. All calculations are then performed on such "soft" bits, with only the final result being output as a "hard" 0 or 1. This technique is named "fuzzy logic" and is applicable to message decoding and to control systems in which not only are the input channels noisy, but the mathematical representation of the system itself may be ill-defined. Artificial neural networks push this reasoning even further by forming universal mathematical constructs (networks of interconnected "neuron units" that operate on soft bits) that don't even require defining the mathematical background of the system. Instead, such systems are prepared by "training" on representative data sets and are then left to do whatever they "consider" most appropriate when put into operation...
The techniques mentioned in the last paragraph are two of the leading-edge branches of signal processing theory and reach well beyond the scope of this web-site. We encourage courageous readers to explore the mathematical landscapes of noise and uncertainty further, as that is where really interesting computer programming topics such as pattern recognition, artificial intelligence and extraterrestrial signal detection emerge.