
How SRAM Memory Works


Written by Diane Ware - November 07, 2016 in Computers

Introduction

SRAM memory - ever hear of it before? Well, if you haven't (or even if you have!), and you're curious about the workings of a high-speed, efficient memory usually found right within or near the CPU (central processing unit) of a computer or cell phone, then read on. 

Computers, and other microcontrollers such as those found in cell phones and microwave ovens, use various kinds of memory to process data and store data temporarily. The data can be accessed from the hard drive or other digital storage sources, such as from the Internet or external USB flash drives.

Here are the main types of memory. 

1. Level 1 and Level 2 Cache
2. Virtual memory (on the hard drive)
3. Normal system RAM (RAM sticks that are typically inserted into slots on the motherboard)
4. Hard disk (hard drive)

 

I will describe the memory types listed above in order, starting from number 4 and working up to number 1. 

Computer memory is normally a temporary work space for a computer or microcontroller. This work space is where the CPU can pull data out of permanent storage places, like the hard drive, and perform computations on that data. Whereas storage devices usually keep data permanently, computer memory usually loses its contents once the computer or cell phone is turned off or loses its power source. So a hard disk, or your computer's hard drive, is typically used to store documents, pictures, and other data. 

However, a hard disk (hard drive) is listed as a type of memory for two reasons. One reason is that web pages can be stored on the hard drive to be accessed later. This acts as a form of memory for your phone or computer, since a page you visit often can simply be loaded into your browser from the hard drive instead of being placed in memory. This saves on time and memory space. The second reason is virtual memory. Part of the hard drive can be used as virtual memory, a location on the hard drive that the CPU can use in place of RAM. This saves on memory space too, but not so much on time. 

RAM (random access memory) is usually the memory most of us are familiar with, as seen below in the image of RAM sticks. RAM does use transistors, which will be described later in this blog post and which are very crucial to SRAM, but RAM sticks also use capacitors to hold each input bit value of 0 or 1. To read more about RAM and other kinds of memory, see this link

SRAM (static random access memory) memory, the main theme of this blog post, has to do with logic gate flip-flops or rather gated D latches. SRAM is usually the type of memory used for the CPU's L1, L2, and L3 cache memory, memory that is placed nearby so that the CPU has quick access to the memory while processing programming instructions. 

 

The components for the logic gates in SRAM will be described below in order from the bottom level details on up to the more complicated details. 

I just want to say, before continuing, that when I was in college, one of my favorite courses dealt with these very bottom-level details of how computers function. I loved that course! We dealt with how transistors, CPUs, and other computer components work. We also studied assembly language, the computer language that interacts with the CPU, memory, and motherboard buses to carry out the most basic procedures that any computer or microcontroller (microcontrollers are found in cell phones, your car's computer, microwave ovens, calculators, etc.) must accomplish to, well, be a computer! I found this college course fascinating, since it gave me a view into how it all works, and I hope you will find the same as you read on. 

 

CMOS Transistors

CMOS transistors are complementary metal-oxide-semiconductor transistors. 

Valence electrons are the electrons located on the outermost shell of an atom that can be transferred to or shared with another atom. 

There are two types of CMOS or MOS transistors: the p-type and the n-type. 

N-types are composed of silicon wafers doped with phosphorus. Silicon normally has four valence electrons. But when phosphorus is added, an element that normally has five valence electrons, one extra electron will be left to flow freely through the transistor. 

P-types are composed of silicon wafers doped with boron. As mentioned above, silicon normally has four valence electrons. But when boron is added, an element that normally has three valence electrons, a positive charge "hole" will be able to flow through the transistor. 

With an n-type MOS transistor, if 2.9 volts is allowed to pass through the circuit of the transistor (important: the conventional way of showing electric current is from positive to negative; read this for more), the p-type silicon wafer is repelled by all the positive charges, and so current, as in positive charges, rushes toward the negative, or n-type, side, and the circuit is closed, typically interpreted as a 1. However, if 0 volts is applied, then no current can pass through, so this is interpreted as a 0.

The opposite is the case for a p-type MOS, as can be seen below: 

Now, how is this all significant to SRAM? Continue reading to find out more. 

 

Logic Gates

Logic gates are extremely important to everything dealing with computers. And the NOT logic gate is at the most fundamental level in any computer system. The NOT logic gate is, in essence, what computers are all about - the binary system. The binary system is based upon 0 or 1: something is either on (1) or it is off (0). NOT logic gates, and other logic gates, as well as extremely complicated variations of them, form the entire basis of everything we know and can find useful from computers, cell phones, robotics, and even AI (Artificial Intelligence). That's all it is - the lowly little electrical circuit, able to be either on (1) or off (0), is the foundation of everything tech-related. One of my professors made a statement that I will always remember. He explained that computers are very dumb machines that just happen to be able to do things very, very fast. And that's it. It's just about the extremely fast flow of electricity through massive amounts of logic gates, combined with a clock pulse to regulate the electric flow. 

The NOT gate works like so: 

Electric current flows down from the volt source. Current can flow to the input, depending upon whether the input is a 0 or a 1 input value. In these examples, 1 is representative of 2.9 volts, and 0 is representative of 0 volts. When 2.9 volts flows down from the volt source, and 0 volts is the input, then the p-type MOS gate will allow the current to proceed through, sort of like a bridge or a small connection of wire, and it will head out at the output wire, since the n-type MOS gate will be closed due to the 0 volts. Hence, an input of 0 gives an output of 1. When 2.9 volts flows down from the volt source, and 2.9 volts is the input, then the p-type MOS gate will not allow the current flow to continue. Instead the input of 2.9 volts heads toward the n-type MOS gate, which allows current to flow through and on to ground. Therefore, the output never receives any current, and the output is nothing, a 0. 
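To make that switching behavior concrete, here is a minimal Python sketch (my own illustration, not anything from a textbook or datasheet) that treats the two MOS transistors as simple switches: the p-type conducts when its gate sees a 0, and the n-type conducts when its gate sees a 1. The function name not_gate and the 0/1 convention are just assumptions for this example.

```python
# Model the CMOS inverter as two switches between the supply and ground.
def not_gate(input_bit):
    p_type_conducts = (input_bit == 0)   # pMOS connects the output to the 2.9 V supply
    n_type_conducts = (input_bit == 1)   # nMOS connects the output to ground
    if p_type_conducts:
        return 1   # output pulled up to the supply voltage
    if n_type_conducts:
        return 0   # output pulled down to ground
    # with a single CMOS pair and a valid 0/1 input, exactly one branch always conducts

for bit in (0, 1):
    print(f"NOT {bit} = {not_gate(bit)}")   # NOT 0 = 1, NOT 1 = 0
```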

By the way, a NOT gate is also called an inverter. 

Things get more interesting when there are two inputs. 

In the image below, NOT-type gates are positioned in just the right manner to produce the output at C. By using the NOT gate information I just listed above, and keeping in mind that the volt source is on and is flowing, see if you can determine the results of the table in the image below. That table is called a truth table, and as you can see, the typical outputs of a NOR gate with two inputs of 00, 01, 10, or 11 are the values in the C column. 

Normally, when considering the term OR, we imagine a choice. Do you want coffee or do you want milk? Either one could be a yes, or true, and you could even want both; only choosing neither would be a no, or false. And the designers of the NOR gate realized this too - that OR is quite the handy logic statement. However, the NOR circuit above delivers just the opposite results. So, in order to compensate for this situation, the original designers of the NOR gate simply added an inverter, a NOT gate, to the NOR result, to obtain the column under the final output at D: 

Using this same process, they also developed the AND gate, by taking the outputs of a NAND gate and using an inverter, a NOT gate, to give the final AND results: 

As can be seen in the truth table above, when A and B are both 1, then C (the NAND output) is 0. But when A and B are both 0's or a 0-and-1 combination, then C is 1. Inverting C gives the final AND result. The nature of AND is about having a statement where both parts of the statement must be true in order for the entire statement to be true. Example: if the sky is clear-blue and the sun is bright, then I consider it a nice day. According to my reasoning, you can't have one and not the other, and you can't have neither, in order for my particular statement to be true. Only when the sky is clear-blue and the sun is bright is it a nice day. 

Like I mentioned earlier, logic gates are the lower fundamental level, the foundation, of all microcontrollers and computers. The NOT, OR, and AND gates are said to be logically complete because computer engineers can build circuits to do any specification of any truth table by just using these three gates. But including NAND and NOR gates makes it even easier.

And including the exclusive OR gate, called a XOR gate, and the exclusive NOR gate, called a XNOR gate, gives even more options for designing logic circuits. Take a look at the truth tables below. For the XOR, only when there is a 0 and a 1 will the XOR give a result of 1. A 0 and another 0, or a 1 and another 1 will give 0 as the result. For the XNOR, it will give the opposite of this result. That is, only when there are both 0's for inputs or both 1's for inputs will the XNOR output a 1: 
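If you like to tinker, here is a small Python sketch (again, my own illustration rather than anything from the post's figures) that prints the truth tables for the two-input gates discussed so far. Each gate is just a tiny function of two bits.

```python
# Basic logic gates as one-line functions, plus a printed truth table.
def NOT(a):      return 1 - a
def AND(a, b):   return a & b
def OR(a, b):    return a | b
def NAND(a, b):  return NOT(AND(a, b))
def NOR(a, b):   return NOT(OR(a, b))
def XOR(a, b):   return a ^ b
def XNOR(a, b):  return NOT(XOR(a, b))

gates = [AND, OR, NAND, NOR, XOR, XNOR]
print("A B | " + " ".join(g.__name__ for g in gates))
for a in (0, 1):
    for b in (0, 1):
        row = " ".join(f"{g(a, b):>{len(g.__name__)}}" for g in gates)
        print(f"{a} {b} | {row}")
```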

One more topic before moving on to the next section. Notice in the example below, of a three input AND gate with one output, that logic gates can certainly perform calculations on more than two inputs at a time. Only if A, B, and C are all 1's will D output a 1: 

Combinational Logic Circuits

Well, so far, we have CMOS transistors which are used to build logic gates. And now, with those logic gates, we can build logic structures. 

And don't worry. All this info so far is definitely needed for the buildup on how SRAM works. 

There are two kinds of logic structures - those that take inputs and provide outputs, i.e. combinational logic circuits, and those that can store information. In this section, I'll briefly cover some of the common combinational logic circuits: the decoder, the MUX, the half and full adder, and the programmable logic array (PLA). 

The decoder is such that exactly one of its outputs is 1, while the rest are 0's. Two values coming in, either 00, 01, 10, or 11, will give a specific four-digit output (consisting of only 1s and 0s, at D1, D2, D3, and D4) with exactly one of these digits as a 1. This is very helpful for a CPU to determine the type of instruction that it will process next. 
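Here is a rough Python sketch of such a two-input decoder. I label the outputs D0 through D3 (the figure labels them D1 through D4); exactly one output is 1 for any input pair.

```python
# A 2-to-4 decoder: one-hot output selected by the two input bits.
def decoder_2_to_4(a1, a0):
    return [
        int((a1, a0) == (0, 0)),   # first output asserted for input 00
        int((a1, a0) == (0, 1)),   # second output asserted for input 01
        int((a1, a0) == (1, 0)),   # third output asserted for input 10
        int((a1, a0) == (1, 1)),   # fourth output asserted for input 11
    ]

for a1 in (0, 1):
    for a0 in (0, 1):
        print(a1, a0, "->", decoder_2_to_4(a1, a0))
```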

A MUX, also referred to as a multiplexer, takes one of its inputs and connects it with the output. Shown below is a MUX with two inputs and one selector and a MUX with four inputs and two selectors. Look at the truth tables of both MUX circuits to see the outcomes. For the two-input MUX, if the selector S is 0, then it will choose the corresponding values in the A0 column of inputs. If the selector S is 1, then it will choose the corresponding values in the A1 column of inputs. The same situation occurs for a four-input, two-selector MUX, except that it is best to understand that binary 00 is 0, binary 01 is 1, binary 10 is 2, and binary 11 is 3. So when S1 and S2 are 00, then the value at D0, whether 0 or 1, will be output at C (also be aware that the x's are just there as placeholders for whatever 0 or 1 values could be in those spots, so you can instead focus on the selected 1's or 0's). The same goes for D1: when S1 and S2 are 01, then whatever value is at the D1 input spot will be output at C. And so on for the rest of the D inputs. In general, a MUX has 2^n inputs and n select lines.
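And here is a hedged sketch of the two MUXes just described. The function and parameter names (mux2, mux4, s, d) are my own shorthand, not anything standard.

```python
# A multiplexer: the select line(s) pick which data input passes to the output.
def mux2(a0, a1, s):
    # one select line chooses between two inputs
    return a1 if s == 1 else a0

def mux4(d, s1, s0):
    # two select lines choose one of four inputs d[0..3];
    # the pair (s1, s0) is read as a binary number 0..3
    index = s1 * 2 + s0
    return d[index]

print(mux2(0, 1, s=1))                   # selects a1 -> 1
print(mux4([1, 0, 1, 1], s1=1, s0=0))    # (s1, s0) = 10 selects d[2] -> 1
```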

A half adder and a full adder, shown below, are two other combinational logic circuits. To understand these, it is best to understand how binary math works. Using just two bits, a bit being either a 1 or a 0, then 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 10 (10 is 2 in binary). These equations are easiest to follow when written with two place holders each, i.e. 0 + 0 = 00, 0 + 1 = 01, 1 + 0 = 01, and 1 + 1 = 10. But when you are working with only one column per addition computation, that is why a carry is needed, just like when adding the 9 + 8 within 49 + 28: you write down the 7 and carry the 1 over to add in with the 4 + 2, for a final answer of 77. To get a better understanding of binary math and computations with binary math, visit this site or this site here. Also notice below that for the half adder, a XOR gate and an AND gate were used to get the necessary results, simply because the sum column matches a XOR truth table and the carry column matches an AND truth table.

The full adder has three inputs, C-in, B, and A, a sum column, and a carry column. As with the half adder, it was determined that two XOR gates, two AND gates, and an OR gate, positioned as seen below, would get the accurate results needed. For more information on half adders and full adders, visit either this site or another good site here.
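The adders can also be expressed in a few lines of Python, using the same gates the circuits use: XOR for the sum bit, and AND (plus an OR in the full adder) for the carry. This is just an illustrative sketch of the truth tables above, with my own function names.

```python
# Half adder and full adder built from XOR, AND, and OR operations.
def half_adder(a, b):
    s = a ^ b          # sum bit comes from a XOR gate
    carry = a & b      # carry bit comes from an AND gate
    return s, carry

def full_adder(a, b, c_in):
    s1, c1 = half_adder(a, b)
    s, c2 = half_adder(s1, c_in)
    carry = c1 | c2    # an OR gate combines the two possible carries
    return s, carry

print(half_adder(1, 1))      # (0, 1)  i.e. 1 + 1 = 10 in binary
print(full_adder(1, 1, 1))   # (1, 1)  i.e. 1 + 1 + 1 = 11 in binary
```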

A programmable logic array (PLA) is a very common building block for implementing any collection of logic gates/functions that may be necessary to complete a task. The programmable logic array consists of an array of AND gates (an AND array) followed by an array of OR gates (an OR array), with the number of AND gates corresponding to the number of input combinations, or rows, in the truth table. The number of OR gates corresponds to the number of output columns in the truth table. For n-input logic functions, a PLA with 2^n n-input AND gates is needed. The implementation algorithm is simply to connect the output of an AND gate to the input of an OR gate if the corresponding row of the truth table produces a 1 in that output column. This is where the term programmable comes from, since the connections from the AND gate outputs to the OR gate inputs are programmed to implement the desired logic functions. I know this is rather a brief, complicated explanation, so I am recommending this link and this one also to further explain the PLA.  
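To make the "programmable" idea a little more concrete, here is a loose Python sketch of a PLA: the AND plane recognizes each truth-table row, and the OR plane combines the rows whose output column holds a 1. As an example, I have "programmed" it with the full adder truth table from the previous section. The function name pla and its dictionary format are my own invention, purely for illustration.

```python
# A PLA modeled as an AND plane (row match) feeding an OR plane (output columns).
def pla(inputs, programmed_rows):
    # inputs: tuple of bits; programmed_rows: {input_pattern: (out1, out2, ...)}
    n_outputs = len(next(iter(programmed_rows.values())))
    outputs = [0] * n_outputs
    for pattern, outs in programmed_rows.items():
        row_selected = int(inputs == pattern)     # the AND plane: 1 only for the matching row
        for i, bit in enumerate(outs):
            outputs[i] |= row_selected & bit      # the OR plane: collect rows programmed to 1
    return tuple(outputs)

# Truth table rows for (a, b, c_in) -> (sum, carry), i.e. a full adder.
full_adder_rows = {
    (0, 0, 0): (0, 0), (0, 0, 1): (1, 0), (0, 1, 0): (1, 0), (0, 1, 1): (0, 1),
    (1, 0, 0): (1, 0), (1, 0, 1): (0, 1), (1, 1, 0): (0, 1), (1, 1, 1): (1, 1),
}
print(pla((1, 0, 1), full_adder_rows))   # -> (0, 1), i.e. 1 + 0 + 1 = 10 in binary
```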

This concludes the section on combinational logic circuits. If you would like to learn more about this overall subject, I would suggest either this LINK or this one: LINK

 

Basic Storage Elements

Well, I hope I haven't overwhelmed you so far. My main goal with the explanations of the various combinational logic circuits is to have you become aware of the possibilities that exist when combining the basic NOT, OR, XOR, and AND gates (and their opposite gates, such as the NOR). I am hoping you can already see the huge range of circuits that can be built from combinations of these gates.


Ultimately, however, most of these gates are used for temporary values. This is especially true for SRAM in the CPU.

But again, notice that I mentioned *temporary values*. That is key. Yes, these various gates can do computations, but their values are fleeting, used only for a moment and then passed on to the next gate or gates for further computations. 

And that is where the basic storage elements come in, the foundation of SRAM memory. 

 

S-R Latch, Gated D Latch, and 4 by 3 Bit Memory Layout

The S-R latch is shown below. Be aware that the S-R latch is also referred to as a flip-flop circuit. The S stands for set and the R stands for reset, and the value at a is the value that needs to be stored. This latch works by starting off in the quiescent (quiet) state. This is where the S and R inputs both have a logic value of 1. If a is also 1 (since, for instance, it is storing the value 1), then A is also 1. And since R is 1 now too, the value of b will be the output of two 1's passing through a NAND gate, which is 0. And, since B is therefore 0 also, the result of S = 1 and B = 0 passing through a NAND gate will still be a = 1. As long as S = 1 and R = 1, a = 1 will keep this state. The reverse holds true for storing a = 0, i.e. then A must be 0 too, and so the combined output of A and R, 0 and 1, gives a NAND result of b = 1. So then B = 1 and S = 1, giving a NAND output of a = 0. And again, as long as S = 1 and R = 1, a will remain at 0. 

In order to set the value of a in the first place, say to a = 1, S is momentarily set to 0 but at the same time keeping R set to 1. And in the same manner, to set a = 0, the reverse is true. And then, once the value of a is set, then S and R must both be set back to 1 to keep the a value. 

Note that S and R should never both be 0. 

I hope you are noticing the important situation at hand. As long as a computer, cell phone, or other device has a power source to keep electric charge running through it, SRAM memory will hold a value as long as S and R are both 1. Hence this is why the S-R latch is considered a storage element.
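Here is a small Python sketch (my own simplified model, not the post's circuit) of the S-R latch built from two cross-coupled NAND gates. A short loop re-evaluates the two gates until their outputs stop changing, which stands in for the feedback wiring in the real circuit.

```python
# S-R latch from two cross-coupled NAND gates.
def nand(x, y):
    return 0 if (x and y) else 1

def sr_latch(S, R, a, b):
    for _ in range(4):                # settle the feedback loop
        new_a = nand(S, b)
        new_b = nand(R, new_a)
        if (new_a, new_b) == (a, b):
            break
        a, b = new_a, new_b
    return a, b

a, b = 0, 1                           # latch currently storing a = 0
a, b = sr_latch(S=0, R=1, a=a, b=b)   # momentarily drop S to set a = 1
a, b = sr_latch(S=1, R=1, a=a, b=b)   # back to the quiescent state: a holds 1
print(a)                              # 1
a, b = sr_latch(S=1, R=0, a=a, b=b)   # momentarily drop R to reset a = 0
a, b = sr_latch(S=1, R=1, a=a, b=b)   # quiescent again: a holds 0
print(a)                              # 0
```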

But more is needed. The gated D latch has the ability to control the S-R latch. Notice below how the value at a can be set. 
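Continuing the same kind of sketch, a gated D latch can be modeled by steering the data bit D into S and R whenever the write-enable WE is 1, and holding S = R = 1 whenever WE is 0. Again, this is only an illustration under those assumptions; the helper names are my own.

```python
# Gated D latch: two extra NAND gates steer D into the S-R latch when WE = 1.
def nand(x, y):
    return 0 if (x and y) else 1

def sr_latch(S, R, a, b):
    for _ in range(4):           # settle the cross-coupled feedback, as in the previous sketch
        a = nand(S, b)
        b = nand(R, a)
    return a, b

def gated_d_latch(D, WE, a, b):
    S = nand(D, WE)              # S drops to 0 only when D = 1 and WE = 1
    R = nand(nand(D, D), WE)     # nand(D, D) acts as NOT D; R drops to 0 only when D = 0 and WE = 1
    return sr_latch(S, R, a, b)

a, b = gated_d_latch(D=1, WE=1, a=0, b=1)   # WE asserted: write a 1
print(a)                                    # 1
a, b = gated_d_latch(D=0, WE=0, a=a, b=b)   # WE = 0: D is ignored, the stored value is held
print(a)                                    # still 1
```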

Now, finally, since we have the gated D latch, more can be accomplished. Take a look at the image below. 

As you can see in the above image, four gated D latches can have their values set at one time. Notice that Qx (x being any number from 0 to 3) now replaces the a for each value that the gated D latches can hold. Similarly to the single gated D latch I described above, each latch can be set by changing the values of WE, Dx, and S and R. The above is considered a 4-bit register, which can hold a 4-bit value, often called a word. So, it can hold a 4-bit word, for example 0101. 

Now, finally, it can be seen how logic gates positioned in certain configurations can accomplish SRAM memory. 

Look at the image below. It is a 4-bit by 3-bit SRAM memory (four locations, each 3 bits wide). If you notice, on the left side of the SRAM memory, colored in a tan block area, there is a two-input decoder present. If the inputs of A are 00, then a 1 is present on the corresponding decoder output line. This will in turn combine with a 1 from WE if a 1 or 0 needs to be set in the first row of 3 bits. Depending upon what D[2], D[1], and D[0] are, those values will be set into that row, such as 101. Notice too that the other three tan block areas are three 4-input MUX configurations. 

Before showing another image of the 4-bit by 3-bit SRAM, I want to explain some things about computer memory in general. Bits are often stored as self-contained units. A register stores a number of bits as a self-contained unit, though some registers will store just one bit. Also, there is addressability. Addressability is the number of bits in each memory location, usually a row of bits such as can be seen above in the image. The addressability of the memory above is 3 bits. However, most RAM and other memory has 8-bit addressability (and 8 bits is one byte). And each byte is a memory address. If you have 16 MB (16 megabytes) of memory, that means approximately 16 million memory locations, each containing one byte of data. Address space is the total number of uniquely identifiable locations. 16 MB means that there are approximately 16 million uniquely identifiable memory locations. In the image above, there are four uniquely identifiable memory locations, so the memory above has four locations, and its addressability is 3 bits. A memory of size 2^2 requires two bits to specify the addresses. So, for example, a memory size of 4 GB (4 gigabytes), about 2^32, would require 32 bits to specify all the available addresses. Most computers nowadays have 64-bit operating systems. But most computers with 64-bit operating systems still only average about 4 to 8 GB of RAM. Theoretically speaking, though, amazingly enough, a 64-bit operating system could accommodate up to 16 exabytes, which is about 16.8 million terabytes of RAM! For a very interesting discussion of how much actual space this would take up (think a country-size amount!) see this article.
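As a quick worked example of that relationship between memory size and address bits (2^n locations need n address bits), here is a tiny Python snippet; the numbers simply restate the figures above.

```python
import math

def address_bits(num_locations):
    # number of address bits needed for a memory with this many locations
    return int(math.log2(num_locations))

print(address_bits(4))               # the 4-location memory above needs 2 address bits
print(address_bits(4 * 1024**3))     # 4 GB of byte-addressable memory needs 32 address bits
print(2**64 // 1024**6)              # 64 address bits cover 16 exabytes of locations
```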

However, I don't want to head off in another direction here. Let's get back to SRAM memory.

Take a look at this image again that I posted earlier: 

These are older SRAM L1 and L2 caches positioned right near the CPU. Though the integrated circuits look quite similar to those on RAM sticks, keep in mind that while regular RAM and its variations also use transistors with logic gates, their main storage elements are capacitors, which need to be refreshed very often to keep the necessary 1s and 0s in memory. 

Below is one last image I wanted to show. This image shows the process for reading location 3 (11 in binary) of the same 4 (2^2) bit by 3 bit memory from the image shown earlier. 

In the above memory, the goal is to read the data in memory location 11 (3 in usual base 10). So the address given at A[1:0] = 11, which means that after going through the decoder on the left side of the memory, the bottom line, with all the logic gates appearing in orange, is the line that will be read. Note that the three other decoder outputs are not asserted, as this is often called, since they each have values of 0. The value stored at location 11 is 101. These three stored bits are then each ANDed with the 1 on their word line, the value that flows from the decoder's asserted output (since that is the address that was picked from the decoder). So the output is D[2] = 1, D[1] = 0, and D[0] = 1. Note that all the other address locations produce 0's (since they have been ANDed with unasserted word lines), and when all these values head down the three MUX gates to the bottom, where the D[x] outputs are located, those 0's do not affect the final OR gate at the bottom. So the output of D[2] to D[0], which is often denoted as D[2:0], is 101 - only what is located in address 11. 

Memory can be written to the address locations in a similar manner. Say 111 is to be written at address location 11. Then, in the same manner, the decoder on the left side of the SRAM memory will have only line 11 asserted, and when WE is asserted (is 1) as well, 111 will be written into the gated D latches on address location 11. 
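Putting the decoder, word lines, and output logic together, here is a simplified Python sketch of the whole 4-location by 3-bit memory, including the read of 101 at address 11 and the write of 111 described above. The structure (lists of bits, a one-hot decoder, AND/OR for the read path) only loosely mirrors the figures, and the function names are my own.

```python
# A 4-location by 3-bit memory: decoder -> word lines -> AND/OR read path, gated write.
memory = [
    [0, 0, 0],   # location 00
    [0, 0, 0],   # location 01
    [0, 0, 0],   # location 10
    [1, 0, 1],   # location 11  (the 101 used in the read example above)
]

def decode(a1, a0):
    # one-hot word lines: exactly one of the four is 1
    return [int((a1, a0) == (0, 0)), int((a1, a0) == (0, 1)),
            int((a1, a0) == (1, 0)), int((a1, a0) == (1, 1))]

def read(a1, a0):
    word_lines = decode(a1, a0)
    out = [0, 0, 0]
    for row, line in zip(memory, word_lines):
        for i in range(3):
            out[i] |= row[i] & line   # unasserted rows contribute only 0s to the final OR
    return out

def write(a1, a0, bits, WE=1):
    if WE:                            # nothing is stored unless write-enable is asserted
        word_lines = decode(a1, a0)
        for row, line in zip(memory, word_lines):
            if line:
                row[:] = list(bits)   # latch the new bits only on the asserted word line

print(read(1, 1))        # [1, 0, 1] -> D[2:0] = 101
write(1, 1, [1, 1, 1])   # store 111 at location 11
print(read(1, 1))        # [1, 1, 1]
```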

In summary, SRAM memory is produced via CMOS or MOS transistors positioned in such manner as to allow the alignment of the necessary logic gates, such as NOT gates, AND gates, and OR gates, so as to set and keep bit values in word lines. 

I hope that this blog post left you more informed than confused, since I know all this logic gate stuff is not easy to grasp the first time around :-)

But at least now you have a better understanding of how a computer or cell phone functions at its most basic level and how it can manipulate and store binary data of 0's and 1's. 

The main theme of this blog post was taken from the wonderful book I had in the college course I mentioned earlier, the class I enjoyed so much. The book is Introduction to Computing Systems: From Bits and Gates to C and Beyond, by Yale N. Patt and Sanjay J. Patel. Also, here is a college course outline that details many of the topics I discussed in this post. Just click on some of the handout links under the Course Outline section to find out more about each topic.

I plan on writing another blog in the near future similar to this but based upon how a CPU works with SRAM and some of the necessary logic gates and circuits. 

Thank you for reading this blog post! 


Diane Ware

Manager - Tech Support

I am co-owner of Ware Repair and enjoy working with technology, web design and development, and embedded programming, especially that which applies to robotics. I also enjoy writing blogs and sci-fi novels in my free time.