Gray code (continued)

This surprising algorithm is based on the observation that it is not necessary to know the exact 
parity of A[i] and B[i], only whether they have different parities or the same parity. The 
interpretation of the bits E and F is:

        EF = 00 - parity of A and B the same, no change in binary carry
        EF = 01 - parity different and B had the last 1
        EF = 10 - parity different and A had the last 1
        EF = 11 - parity of A and B the same, change in binary carry

CE and CF must both be 0 on completion, otherwise there is an overflow. Refer to [Luc59] for 
details and a proof that this algorithm is correct. Note the expression for the sum bit, which 
represents a change in the binary code of the binary sum as occurring when one of the inputs changes 
(indicated by A[i] and by B[i]) or the carry changes (indicated by (E and F)) - this is the same 
equation as for binary addition. 
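A direct transcription of the serial algorithm in Python makes this concrete (a sketch; gray and to_bits are helper names introduced here; note that the updates to E and F must use their old values simultaneously):

```python
def gray(x):
    """Reflected binary (Gray) code of x."""
    return x ^ (x >> 1)

def to_bits(x, n):
    """n bits of x, least significant first."""
    return [(x >> i) & 1 for i in range(n)]

def lucal_add(A, B, n):
    """Add two n-bit Gray code words (bit lists, LSB first).
    Returns (S, CE, CF); CE = CF = 0 on completion means no overflow."""
    E, F = sum(A) % 2, sum(B) % 2      # parities of A and B
    S = [0] * n
    for i in range(n):
        S[i] = (E & F) ^ A[i] ^ B[i]
        # both updates use the *old* E and F ("in parallel")
        E, F = (E & (1 - F)) ^ A[i], ((1 - E) & F) ^ B[i]
    return S, E, F

# 3 + 4 = 7, in 4-bit Gray code
S, CE, CF = lucal_add(to_bits(gray(3), 4), to_bits(gray(4), 4), 4)
assert S == to_bits(gray(7), 4) and CE == 0 and CF == 0
```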

7. Extensions to Gray codes

Bases other than binary

The original definition of Gray code applied to binary digits. However, it is very straightforward 
to extend the concept of a distance-1 transition to numbers of other bases, or even mixed-base 
numbers. A distance-1 transition is extended to mean a change by 1 in one digit only. The 
algorithms for generation and conversion are straightforward extensions of those for the binary case.

For example, to generate the code sequence, for each increment at a given digit, the lower order 
code is generated once,  alternately up and down. Suppose, for example, a code is desired for 
numbers that have a base 5 digit followed by a base 3 digit.
                Natural                         Gray                    Binary
                sequence                        sequence                        code

        0,0                             0,0                             000,00                           
        0,1                             0,1                             000,01  
        0,2                             0,2                             000,11  
        1,0                             1,2                             001,11  
        1,1                             1,1                             001,01  
        1,2                             1,0                             001,00  
        2,0                             2,0                             011,00  
        2,1                             2,1                             011,01   
        2,2                             2,2                             011,11   
        3,0                             3,2                             010,11  
        3,1                             3,1                             010,01  
        3,2                             3,0                             010,00  
        4,0                             4,0                             110,00  
        4,1                             4,1                             110,01  
        4,2                             4,2                             110,11  

If the digits are encoded in canonical binary Gray code then the encoding is itself a binary Gray 
code. Note that the Gray code will not be cyclic unless we are willing to regard a transition from 
the maximum digit to 0 as being single distance. In that case the code will be cyclic only if the base 
of the leading digit is even.
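The generation rule can be sketched directly (a sketch in Python; mixed_gray_sequence is a name introduced here):

```python
def mixed_gray_sequence(bases):
    """Reflected Gray sequence for mixed-base numbers, leading digit
    first: for each increment of a digit the lower-order code is run
    through once, alternately up and down."""
    if not bases:
        return [[]]
    lower = mixed_gray_sequence(bases[1:])
    seq = []
    for d in range(bases[0]):
        # even-numbered sweeps go up, odd-numbered sweeps go down
        block = lower if d % 2 == 0 else lower[::-1]
        seq.extend([d] + w for w in block)
    return seq

# the base 5-then-3 example tabulated above
seq = mixed_gray_sequence([5, 3])
assert seq[:6] == [[0, 0], [0, 1], [0, 2], [1, 2], [1, 1], [1, 0]]
```

Successive words differ by 1 in exactly one digit, which is the extended distance-1 property.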

The rules for conversion and counting are also natural extensions of the binary case. In binary we 
invert a digit if the immediately higher order digit is 1, signifying that the low order sequence has 
been repeated an odd number of times and is thus descending. The extension is that a digit is base-
1 complemented if its sequence has been repeated completely an odd number of times, which is 
exactly when the number represented by the higher-order digits is odd. This is easy to test if the 
immediately higher digit is of an even base, in which case the condition is simply that that digit is 
odd (all digits above it contribute even amounts). If odd bases are involved then more digits take 
part: the condition is that the immediately higher digit, together with every higher-order digit that 
is separated from the digit in question only by odd bases, sums to an odd number.

For example, a number system with bases 4,7,5,2,6:

                Original                Gray                    
                code word               code     word   

        3,2,2,1,4               3,4,2,0,1                                                        
        0,1,0,1,0               0,1,4,0,5                               
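A sketch of this conversion in Python (mixed_to_gray is a name introduced here; it complements a digit exactly when the value of the higher-order digits - the count of completed sweeps of that digit's sequence - is odd):

```python
def mixed_to_gray(digits, bases):
    """Natural mixed-base digits (leading digit first) to reflected Gray
    form: base-1 complement a digit whenever the value represented by
    the higher-order digits is odd."""
    g, prefix = [], 0
    for d, b in zip(digits, bases):
        g.append(d if prefix % 2 == 0 else b - 1 - d)
        prefix = prefix * b + d
    return g

# the bases 4,7,5,2,6 example above
assert mixed_to_gray([0, 1, 0, 1, 0], [4, 7, 5, 2, 6]) == [0, 1, 4, 0, 5]
```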
There have been many papers exploring the generation of Gray codes in bases other than binary 
[ER84], [Dar72], [Bar81]. [ThMu93] introduces a parallel algorithm for generation, using the 
power of a reconfigurable bus for fast carry propagation.

Related concepts

The concept of adjacent symbols differing at one bit position only has been extended in many ways. 
A shift of concept usually involves reworking the algorithms that apply to bit strings to apply to the 
new domain.

Finding the order of selection of + or - in the set of expressions +/- f(+/- f(+/- .......)), where f is 
monotone, so that the results are in order, leads to the Gray code sequence itself [Sal72]. 

Algorithms have been developed for single-change set partition sequences [Kay76],
e.g. (1 2 3); (1 2) (3); (1) (2) (3); (1) (2 3); (1 3) (2). Similarly for the partitions of an integer 
[Sav89], P(5,3): 1+1+1+1+1 = 1+1+1+2 = 1+2+2 = 1+1+3 = 2+3. Also for compositions, splits 
of n into k parts [Kli82], L(6,3): 2+2+2 = 3+2+1 = 4+1+1. Other analogous transition 
sequences are covered in [KoRu93] and [Pro85].  

Another path of generalization remains within weighted number systems but seeks variations to their 
properties.  One direction is to look for uniformly weighted codes (those with the same number of 
1 bits) [BER76] and another is for distance-1 codes with the snake-in-the-box property that 
words distance k apart in the counting sequence differ by k bits [Kau70]. There are legions of 
other codes with similar and related properties studied in the literature.

Paths on the n-cube

In the binary case, code words of length n can be regarded as the vertices of an n-cube, and a 
complete Gray code sequence represents one of the Hamiltonian paths. This has an 
application in hypercube computer networks: if the nodes are assigned binary numbers then the 
Gray code defines a path that allows a message to be sent to all processors, once only.
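This is easy to check directly; a small sketch using the standard conversion gray(i) = i xor (i >> 1):

```python
def gray(i):
    return i ^ (i >> 1)

n = 4
path = [gray(i) for i in range(2 ** n)]

# every vertex of the n-cube appears exactly once...
assert sorted(path) == list(range(2 ** n))
# ...and consecutive vertices differ in exactly one bit: a Hamiltonian path
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(path, path[1:]))
# the path even closes into a Hamiltonian cycle
assert bin(path[-1] ^ path[0]).count("1") == 1
```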

As mentioned earlier, the Gray codes are just a small subset of the distance-1 codes and 
Hamiltonian paths. The number of such paths as a function of n is not known; however, paths that 
have additional properties have been enumerated [Gil58].

The n-cube can be generalized to a more-complex graph in the case of bases other than binary. 
Paths of special interest have also been studied in this case [ShRa78], [Coh63]. 

8. Applications of Gray codes

Gray codes continually turn out to have new applications. Two of the more-interesting applications 
are considered here.

Gray codes and Walsh functions

Yuen has shown that there is a nice relationship between the width-n Gray code sequence and the set 
of Walsh functions of length 2^n. A set of Walsh functions of length 2^n is usually defined as a set 
of discrete-valued functions on an interval with values that are orthogonal [Bea75]. However, they 
may also be regarded as a set of 2^n binary code words of length 2^n that are maximally distant, i.e. each 
word is distance 2^(n-1) from all others. For example, for n = 3, a set of 8 length-8 Walsh functions, each 
distance 4 from all others, is:

                        i       binary          Gray            Gray            Walsh
                                                rank            code            code
                        0       000             0               000             00000000                
                        1       001             1               001             00001111                
                        2       010             3               011             00110011                
                        3       011             2               010             00111100                
                        4       100             7               111             01010101                
                        5       101             6               110             01011010                
                        6       110             4               100             01100110                
                        7       111             5               101             01101001                

The contrast with Gray code is striking. Walsh words are maximally distant, and there is no natural 
sequence to the code words; Gray words are minimally distant with a well-defined sequence. It is 
surprising that there is a relationship between the two concepts. 

The algorithm to generate each member of a set of Walsh functions is also delightfully simple:

{0 <= i <= 2^n - 1}
procedure generatew (n, i: integer);
        procedure generatew1 (n, i: integer; d: bit);
                {d represents an inversion of all bits}
                begin
                if n = 0 then output(d)
                else begin
                        generatew1(n-1, i div 2, d);
                        generatew1(n-1, i div 2, d xor (i mod 2))
                end
                end;
begin
        generatew1(n, i, 0)
end;

The similarity of this algorithm to algorithm A3.1 is striking. As pointed out by Yuen, the number 
of transitions in the Walsh code word is the rank of the binary pattern of i in the Gray code 
sequence, and there must be one word for each possible number of transitions. Furthermore, the 
bits of the corresponding Gray code word may be used to specify the transition points in the Walsh 
word, in the same sequence as the sequence of index changes when generating the Gray sequence. 
So, if i has Gray-rank G with bits G2,G1,G0 then the sequence of changes in Walsh word 
number i is 0,G2,G1,G2,G0,G2,G1,G2.
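In Python the recursion reads as follows (a sketch; walsh is a name introduced here), and it reproduces the n = 3 table above:

```python
def walsh(n, i):
    """The 2^n bits of Walsh function i, 0 <= i <= 2^n - 1: the word
    doubles at each level, with the second half inverted when the
    current low bit of i is 1."""
    def gen(n, i, d):          # d represents an inversion of all bits
        if n == 0:
            return [d]
        return gen(n - 1, i // 2, d) + gen(n - 1, i // 2, d ^ (i % 2))
    return gen(n, i, 0)

# e.g. row i = 3 of the table
assert walsh(3, 3) == [0, 0, 1, 1, 1, 1, 0, 0]
```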

Although there is no natural order as such, one reasonable order lists the code words by their 
number of transitions. This can be generated by replacing d xor (i mod 2)
by d xor (i mod 2) xor ((i div 2) mod 2) in the above algorithm, effectively converting 
from binary to Gray en passant.

Analog to digital conversion

The original application of Gray code was in A to D conversion. It is interesting that even with 
fully electronic A-D it appears to be somewhat faster and simpler to convert an analog signal V to a 
Gray code than to convert it to binary. The standard approach, if V is in the range [0, 2^n), is to 
determine the first bit by subtracting 2^(n-1) - if the result is positive then the first bit is 1, if 
negative, the first bit is 0. The process continues with the reduced signal in the first case but with 
the original signal in the second case. There is thereby a decision to be made at each stage that 
slows the process down.

V1 := V;
for i from n-1 downto 0 do begin 
        V2 := V1 - 2^i;
        B[i] := (V2 >= 0);
        if V2 >= 0 then V1 := V2 
end;

However, it is possible to produce the Gray code version of the signal without making decisions, 
though it requires the determination of the absolute value of a voltage (which is realised as a 
rectifier circuit). 

V2 := V - 2^(n-1);
G[n-1] := (V2 >= 0);
V1 := |V2|;
for i from n-2 downto 0 do begin 
        V2 := V1 - 2^i;
        G[i] := (V2 <= 0);
        V1 := |V2|      
end;

The fact that the Gray code is produced can be shown by noting that the V1 in the second 
algorithm is the same as in the first where B[i] = 1 but is the 2^i complement elsewhere. The 
second algorithm treats the first bit in the opposite fashion to the others.

This algorithm has difficulty in dealing with the integer boundary values. For example, in two-bit 
codes for signals in the range [0,4) the ranges mapping to 0, 1, 2, 3 are [0,1), [1,2), [2,3], (3,4). 
This is finessed, as is the issue of rounding, by incrementing V by 0.5 before commencing the 
conversion.
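The decision-free conversion, including the 0.5 pre-increment, can be simulated numerically (a sketch; fold_adc is a name introduced here, and abs stands in for the rectifier):

```python
def gray(x):
    return x ^ (x >> 1)

def fold_adc(V, n):
    """Gray-code an analog value V in [0, 2^n) by repeated folding.
    No decision steers the data path: each stage subtracts a power of
    two, records the sign, and rectifies (takes the absolute value)."""
    V1 = V + 0.5                      # finesse integer boundaries and rounding
    G = [0] * n
    V2 = V1 - 2 ** (n - 1)
    G[n - 1] = int(V2 >= 0)           # the first bit uses the opposite sense
    V1 = abs(V2)
    for i in range(n - 2, -1, -1):
        V2 = V1 - 2 ** i
        G[i] = int(V2 <= 0)
        V1 = abs(V2)
    return G                          # bits, least significant first

assert fold_adc(6, 3) == [1, 0, 1]   # gray(6) = 101
```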

The algorithm was expounded by Yuen [Yue77], [Yue78]. Lippel [Lip78] pointed out that the idea 
was in [Smi56] and attributed there to a 1949 thesis by R. P. Sallen. 

The algorithm above is related to non-restoring division. Yuen [Yue88] showed how it could be 
extended to division and square rooting.

9. Parallel Arithmetic

The Gray code is a non-redundant representation of integers that appears to be far more suited to 
computers than to humans. However, Gray code could only be considered for use in a computer if 
it offered better or comparable cost and performance compared to the standard representation. A 
brief survey of parallel operations on Gray code is thus in order. 

We have seen that binary to Gray conversion is simple and local whereas conversion the other way 
involves calculation of an xor at every bit in a word by use of a parallel prefix tree. Apart from this, 
little has been written on parallel operations. We will have to look at more complex operations. 

This is a practical rather than a theoretical matter. We know that Gray code can be converted to binary 
in O(log n) time and converted back in constant time. Hence any operation that can be done in 
O(log n) time with standard representation can be done in O(log n) time in Gray code. What we 
really want to discover is whether the cost and speed working within Gray is comparable to the 
cost/speed of conversion and doing the work in binary. This is a technology-dependent question. 
There are many traps to avoid. There is little point, for example, in a new algorithm that adds an 
xor delay to each level in the binary circuit as this will be similar to making the conversion first.

Because the interpretation of the weights of Gray-code bits depends on the word's parity, 
arithmetic is much faster if the parity is known in advance. In many computers, where parity is 
used for error checking in memory, it is possible to maintain the parity at very little extra cost, so it 
is reasonable to assume that the parity is known. Nevertheless, we will consider both the situation 
where the parity is known and where it is not.


Let's deal with counting up only.

When counting up an n-bit Gray code the next bit s to be switched is given by:

                P(G) = 0        then s = 0
                P(G) = 1        then s is such that G[s-1] = 1 and G[i] = 0 for i < s-1

If we calculate the terms:

                C[i] = G[i-1] and (not G[i-2]) and ..... and (not G[0])         (i > 0)

Writing P(G) as P, we then have the result of incrementing G to give S as:

                S[i] = (P and C[i]) xor G[i]            (i > 0)
                S[0] = (not P) xor G[0] 

Compare this with incrementing in binary:

                D[i] = B[i-1] and B[i-2] and ..... and B[0]             (i > 0)  
                S[i] = D[i] xor B[i]                    (i > 0)
                S[0] = not B[0]         
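The Gray-code increment equations can be run directly (a sketch; gray_increment is a name introduced here, with gray(x) = x xor (x >> 1) as the reference conversion):

```python
def gray(x):
    return x ^ (x >> 1)

def gray_increment(G):
    """Increment a Gray code word (bit list, LSB first) using the
    counting equations: C[i] = 1 when the lowest 1 of G is at i-1."""
    n = len(G)
    P = sum(G) % 2                     # parity of the word
    C = [0] * n
    for i in range(1, n):
        # C[i] = G[i-1] and not G[i-2] and ... and not G[0]
        C[i] = G[i - 1] & (0 if any(G[:i - 1]) else 1)
    S = [0] * n
    S[0] = (1 - P) ^ G[0]
    for i in range(1, n):
        S[i] = (P & C[i]) ^ G[i]
    return S

# incrementing gray(5) yields gray(6)
assert gray_increment([1, 1, 1, 0]) == [1, 0, 1, 0]
```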

There is little significant difference between calculating in parallel the terms Ci and the terms Di . 
The main complication is the extra xor in the calculation of the output terms Si.  However, if P is 
known in advance and propagated in parallel with the generation of the terms Ci then the extra term 
in the xor amounts to wider and-gates and increased cost, but little performance loss. 

If P is not known in advance then it can be evaluated and propagated in parallel with the 
determination of the terms Ci, so that much of the time taken in forming P is overlapped. However, 
it will still dominate performance, and it needs to be followed by the last xor, which causes an 
additional delay. Even so, the total time should be well within that taken for most computer clock cycles. 


This is interesting. Comparing Lucal's algorithm with the standard serial addition, we can see that 
the main difference is the propagation of two carries rather than one. 

        E := PA; F := PB;
        for i := 0 to n-1 do begin {in parallel}
                S[i] := (E and F) xor A[i] xor B[i];
                E        := (E and (not F)) xor A[i];
                F        := ((not E) and F) xor B[i]
        end;
        CE := E; CF := F; 

        C := 0;
        for i := 0 to n-1 do begin {in parallel}
                S[i] := C xor A[i] xor B[i];
                C        := (A[i] and B[i]) or (B[i] and C) or (C and A[i])
        end;
        CO := C; 

The sum equations are similar and each carry term in the Lucal form is of comparable complexity, 
when expanded, to the binary carry, though there are some 3-input terms.

                E        := (E and (not F) and (not A[i])) or
                                ((not E) and A[i]) or  (F and A[i]);
                F        := (F and (not E) and (not B[i])) or
                                ((not F) and B[i]) or (E and B[i]);
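That these sum-of-products forms agree with the xor forms of the serial algorithm can be checked exhaustively (a small Python sketch):

```python
from itertools import product

# E' = (E and not F) xor input ; F' = (not E and F) xor input:
# the expanded forms must match for all values of the state and input bit.
for E, F, x in product((0, 1), repeat=3):
    assert (E & (1 - F)) ^ x == \
        (E & (1 - F) & (1 - x)) | ((1 - E) & x) | (F & x)
    assert ((1 - E) & F) ^ x == \
        (F & (1 - E) & (1 - x)) | ((1 - F) & x) | (E & x)
```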

We can apply the carry lookahead technique to the Gray code, just as to the binary case. However, 
whereas in binary there are only 2 possible carry conditions (C= 0, C=1) there are 4 conditions for 
Gray code (EF = 00, 01, 10, 11).  

In the binary case the basic recursion step involves two equations (to find the carry out, assuming 
carry-in of 0 and 1), each being the or of two terms, each of which is a two-input and. Assuming 
that cost is proportional to gate inputs, this is a cost of 2*6 = 12 per recursion step. 

For the Lucal equations, to generate the carry we will need 4 equations, each a 4-input sum of 3-
input terms for a cost of 4 * (4*3+4) = 64. A rough estimate would put the cost of the parallel 
Gray carry look-ahead as 5.3 times that of binary (it is not that bad because the initial terms are 
simpler). However, the speed would be of the same order, involving the same number of levels of 
logic. Of course, the standard carry look-ahead is made faster by a factor of about 2 by combining 
two levels of recursion and using 4-input gates. This would be harder with the Gray case as it 
would involve gates of 6 inputs.

The bottom line is that a carry-lookahead Gray adder is feasible, but at a considerable cost and, 
whatever way you look at it, with a significant performance penalty as well. Not the note we would 
like to end on.

10. Summary

The algorithms and circuits involving Gray-codes are of particular interest because they are so 
simple and surprising. The simple Gray code offers a dense counting sequence that is not very 
useful for humans but has the potential of being more natural for machines.

However, when it comes to practical and long-lasting use of the codes, it turns out that Gray code 
does not offer a significant advantage over the conventional representation. Indeed, Gray encoding 
usually gives rise to more complexity. 

Be that as it may, Gray code continues to turn up in diverse areas, some of which are listed in the 
bibliography that follows.
