Dynamic random access memory (DRAM) is a type of random access memory that stores each bit of data in a separate capacitor within an integrated circuit. Since real capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory as opposed to SRAM and other static memory. Its advantage over SRAM is its structural simplicity: only one transistor and a capacitor are required per bit, compared to six transistors in SRAM. This allows DRAM to reach very high density. Like SRAM, it is in the class of volatile memory devices, since it loses its data when the power supply is removed.
History
Schematic drawing of original designs of DRAM patented in 1968.
In 1964, Arnold Farber and Eugene Schlig, working for IBM, created a hard-wired memory cell using a transistor gate and a tunnel-diode latch. They later replaced the latch with two transistors and two resistors, a configuration that became known as the Farber-Schlig cell. In 1965, Benjamin Agusta and his team at IBM created a 16-bit silicon memory chip based on the Farber-Schlig cell, consisting of 80 transistors, 64 resistors and 4 diodes. In 1966, DRAM was invented by Dr. Robert Dennard at the IBM Thomas J. Watson Research Center; he was awarded U.S. patent number 3,387,286 in 1968. Capacitors had been used for earlier memory schemes such as the drum of the Atanasoff–Berry Computer, the Williams tube and the Selectron tube.
The Toshiba "Toscal" BC-1411 electronic calculator, which went into production in November 1965, appears to use a form of dynamic RAM built from discrete components.[1]
In 1969, Honeywell asked Intel to make a DRAM using a 3-transistor cell that they had developed. This became the Intel 1102 (1024x1) in early 1970. However, the 1102 had many problems, prompting Intel to begin work on their own improved design (in secret, to avoid conflict with Honeywell). This became the first commercially available 1-transistor-cell DRAM, the Intel 1103 (1024x1), in October 1970, despite initial problems with low yield until the fifth revision of the masks.
The first DRAM with multiplexed row/column address lines was the Mostek MK4096 (4096x1) in 1973. Mostek went on to hold an 85% share of the worldwide DRAM market until it was eclipsed by Japanese DRAM manufacturers offering equivalent chips at lower prices.
Operation principle
Principle of operation of DRAM read, for simple 4 by 4 array.
Principle of operation of DRAM write, for simple 4 by 4 array.
DRAM is usually arranged in a square array of one capacitor and transistor per cell. The illustrations to the right show a simple example with only 4 by 4 cells (modern DRAM can be thousands of cells in length/width).
The long lines connecting each row are known as word lines. Each column is actually composed of two bit lines, each one connected to every other storage cell in the column. They are generally known as the + and − bit lines. A sense amplifier is essentially a pair of cross-connected inverters between the bit lines. That is, the first inverter is connected from the + bit line to the − bit line, and the second is connected from the − bit line to the + bit line. This is an example of positive feedback, and the arrangement is only stable with one bit line high and one bit line low.
To read a bit from a column, the following operations take place:
1. The sense amplifier is switched off and the bit lines are precharged to exactly matching voltages that are intermediate between high and low logic levels. The bit lines are constructed symmetrically to keep them balanced as precisely as possible.
2. The precharge circuit is switched off. Because the bit lines are very long, their capacitance will hold the precharge voltage for a brief time. This is an example of dynamic logic.
3. The selected row's word line is driven high. This connects one storage capacitor to one of the two bit lines. Charge is shared between the selected storage cell and the appropriate bit line, slightly altering the voltage on the line. Although every effort is made to keep the capacitance of the storage cells high and the capacitance of the bit lines low, capacitance is proportional to physical size, and the length of the bit lines means that the net effect is a very small perturbation of one bit line's voltage.
4. The sense amplifier is switched on. The positive feedback takes over and amplifies the small voltage difference until one bit line is fully low and the other is fully high. At this point, the column can be read.
5. At the end of a read cycle, the row values must be restored to the capacitors, which were depleted during the read: the bit line of the storage cell is also driven to full voltage (refreshed) by the action of the sense amplifier. Due to the length of the bit line, this takes significant time beyond the end of sense amplification.
To write to memory, the row is opened and a given column's sense amplifier is temporarily forced to the desired state, driving the bit line and charging the storage capacitor to the desired value. The amplifier then holds the bit lines in that state, keeping it stable even after the forcing is removed. During a write to a particular cell, the entire row is read out, one value changed, and then the entire row is written back, as illustrated in the figure to the right.
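The charge-sharing and restore sequence described above can be illustrated numerically. The following C sketch models a 4 by 4 array of cell voltages and walks it through precharge, row activation, sensing and write-back; the capacitance ratio, voltage levels and function names are illustrative assumptions, not values or interfaces from any real device.

```c
/* Toy model of the 1T1C read/write principle, for a 4x4 array.
 * Capacitance ratio and voltage levels are illustrative only. */
#include <stdio.h>

#define ROWS 4
#define COLS 4

#define VDD        1.0   /* "full" cell voltage                  */
#define C_CELL     1.0   /* storage cell capacitance (arbitrary) */
#define C_BITLINE 10.0   /* bit line capacitance (much larger)   */

static double cell[ROWS][COLS];   /* stored capacitor voltages   */
static double bitline[COLS];      /* one "+" bit line per column */

/* Steps 1-2: precharge every bit line to the midpoint voltage. */
static void precharge(void) {
    for (int c = 0; c < COLS; c++)
        bitline[c] = VDD / 2.0;
}

/* Step 3: drive one word line high; charge-share each cell with its bit line. */
static void activate_row(int row) {
    for (int c = 0; c < COLS; c++) {
        double shared = (cell[row][c] * C_CELL + bitline[c] * C_BITLINE)
                        / (C_CELL + C_BITLINE);
        bitline[c] = shared;        /* small perturbation around VDD/2 */
        cell[row][c] = shared;      /* the read is destructive         */
    }
}

/* Steps 4-5: sense amplifiers snap each line to a rail and restore the cells. */
static void sense_and_restore(int row) {
    for (int c = 0; c < COLS; c++) {
        bitline[c] = (bitline[c] > VDD / 2.0) ? VDD : 0.0;
        cell[row][c] = bitline[c];  /* refresh / restore the full value */
    }
}

/* A write overrides one column's sense amplifier while the row is open. */
static void write_bit(int row, int col, int value) {
    precharge();
    activate_row(row);
    sense_and_restore(row);         /* whole row is read out...               */
    bitline[col] = value ? VDD : 0.0;
    cell[row][col] = bitline[col];  /* ...one value changed and written back  */
}

static int read_bit(int row, int col) {
    precharge();
    activate_row(row);
    sense_and_restore(row);
    return bitline[col] > VDD / 2.0;
}

int main(void) {
    write_bit(2, 1, 1);
    write_bit(2, 3, 0);
    printf("cell (2,1) = %d, cell (2,3) = %d\n", read_bit(2, 1), read_bit(2, 3));
    return 0;
}
```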
Typically, manufacturers specify that each row should be refreshed every 64 ms or less, as defined by JEDEC standards. Refresh logic is commonly used with DRAMs to automate the periodic refresh. This makes the circuit more complicated, but the drawback is usually outweighed by the fact that DRAM is much cheaper and of greater capacity than SRAM. Some systems refresh every row in a tight loop once every 64 ms. Other systems refresh one row at a time; for example, a system with 2^13 = 8192 rows would require a refresh rate of one row every 7.8 µs (64 ms / 8192 rows). A few real-time systems refresh a portion of memory at a time based on an external timer that governs the operation of the rest of the system, such as the vertical blanking interval that occurs every 10 to 20 ms in video equipment. All methods require some sort of counter to keep track of which row is next to be refreshed. Some DRAM chips include that counter; other kinds require external refresh logic to maintain it. (Under some conditions, most of the data in DRAM can be recovered even if the DRAM has not been refreshed for several minutes.[2])
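As a worked example of the distributed-refresh arithmetic above, the following C snippet reproduces the 64 ms / 8192 rows = 7.8 µs figure; the row count is the 2^13 example from the text, and real devices vary.

```c
/* Distributed-refresh arithmetic for the 64 ms example above. */
#include <stdio.h>

int main(void) {
    const double refresh_window_ms = 64.0;   /* JEDEC-style retention window */
    const unsigned rows = 1u << 13;          /* 2^13 = 8192 rows             */

    /* One row every 64 ms / 8192 rows = 7.8125 microseconds. */
    double per_row_us = refresh_window_ms * 1000.0 / rows;
    printf("refresh one of %u rows every %.4f us\n", rows, per_row_us);

    /* A burst refresh instead does all rows back to back once per window. */
    printf("or refresh all %u rows in one burst every %.0f ms\n",
           rows, refresh_window_ms);
    return 0;
}
```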
Memory timing
There are many numbers required to describe the speed of DRAM operation. Here are some examples for two speed grades of asynchronous DRAM, from a data sheet published in 1998:[3]
"50 ns"
"60 ns"
Description
tRC
84 ns
104 ns
Random read or write cycle time (from one full /RAS cycle to another)
tRAC
50 ns
60 ns
Access time: /RAS low to valid data out
tRCD
11 ns
14 ns
/RAS low to /CAS low time
tRAS
50 ns
60 ns
/RAS pulse width (minimum /RAS low time)
tRP
30 ns
40 ns
/RAS precharge time (minimum /RAS high time)
tPC
20 ns
25 ns
Page-mode read or write cycle time (/CAS to /CAS)
tAA
25 ns
30 ns
Access time: Column address valid to valid data out (includes address setup time before /CAS low)
tCAC
13 ns
15 ns
Access time: /CAS low to valid data out
tCAS
8 ns
10 ns
/CAS low pulse width minimum
Thus, the generally quoted number is the /RAS access time. This is the time to read a random bit from a precharged DRAM array. The time to read additional bits from an open page is much less.
When such a RAM is accessed by clocked logic, the times are generally rounded up to the nearest clock cycle. For example, when accessed by a 100 MHz state machine (i.e. a 10 ns clock), the 50 ns DRAM can perform the first read in 5 clock cycles, and additional reads within the same page every 2 clock cycles. This was generally described as "5-2-2-2" timing, as bursts of 4 reads within a page were common.
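The rounding described above amounts to a small calculation. The C sketch below (the ceil_cycles helper is an illustrative construct, not from any datasheet) maps the "50 ns" grade's tRAC and tPC onto a 10 ns clock and reproduces the 5-2-2-2 pattern.

```c
/* Rounding the "50 ns" asynchronous timings onto a 100 MHz (10 ns) clock. */
#include <stdio.h>

static unsigned ceil_cycles(double time_ns, double clock_ns) {
    unsigned n = (unsigned)(time_ns / clock_ns);
    if (n * clock_ns < time_ns)
        n++;                       /* round up to the next clock edge */
    return n;
}

int main(void) {
    const double clock_ns = 10.0;  /* 100 MHz state machine            */
    const double t_rac = 50.0;     /* /RAS access time ("50 ns" grade) */
    const double t_pc  = 20.0;     /* page-mode cycle time             */

    printf("burst of 4: %u-%u-%u-%u\n",
           ceil_cycles(t_rac, clock_ns),
           ceil_cycles(t_pc,  clock_ns),
           ceil_cycles(t_pc,  clock_ns),
           ceil_cycles(t_pc,  clock_ns));   /* prints 5-2-2-2 */
    return 0;
}
```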
When describing synchronous memory, timing is likewise given as clock cycle counts separated by hyphens, but the numbers have a different meaning: they represent tCL-tRCD-tRP-tRAS in multiples of the DRAM clock cycle time. Note that the DRAM clock rate is half the data transfer rate when double data rate signaling is used. JEDEC standard PC3200 timing is 3-4-4-8[4] with a 200 MHz clock, while a premium-priced high-speed PC3200 DDR DRAM DIMM might be operated at 2-2-2-5 timing.[5]
                PC3200 (3-4-4-8)       High-speed (2-2-2-5)
Symbol          Cycles    Time         Cycles    Time        Description
tCL             3         15 ns        2         10 ns       /CAS low to valid data out (equivalent to tCAC)
tRCD            4         20 ns        2         10 ns       /RAS low to /CAS low time
tRP             4         20 ns        2         10 ns       /RAS precharge time (minimum precharge to active time)
tRAS            8         40 ns        5         25 ns       Minimum row active time (minimum active to precharge time)
The improvement over ten years is relatively modest. Minimum random access time has improved from tRAC = 50 ns to tRCD + tCL = 35 ns, and even the premium 20 ns variety is only 2.5 times faster. However, DDR memory achieves eight times higher bandwidth: thanks to internal pipelining and wide data paths, it can output one word every 2.5 ns, while the EDO DRAM can only output one word per tPC = 20 ns.
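The comparison in the preceding paragraph is simple arithmetic, sketched below in C under the assumptions already given (a 200 MHz clock, 3-4-4-8 timing, and tPC = 20 ns for the EDO part).

```c
/* Random access latency (tRCD + tCL) and peak word rate for PC3200 DDR
 * at a 200 MHz clock, versus the "50 ns" EDO part (tPC = 20 ns). */
#include <stdio.h>

int main(void) {
    const double ddr_clock_ns = 5.0;          /* 200 MHz              */
    const int tCL = 3, tRCD = 4;              /* JEDEC PC3200 3-4-4-8 */

    double ddr_random_ns = (tRCD + tCL) * ddr_clock_ns;   /* 35 ns              */
    double ddr_word_ns   = ddr_clock_ns / 2.0;            /* DDR: 2 words/clock */
    double edo_word_ns   = 20.0;                          /* one word per tPC   */

    printf("DDR random access: %.0f ns\n", ddr_random_ns);
    printf("DDR streams one word every %.1f ns vs %.0f ns for EDO (%.0fx)\n",
           ddr_word_ns, edo_word_ns, edo_word_ns / ddr_word_ns);
    return 0;
}
```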
Errors and error correction
Main article: ECC memory
Electrical or magnetic interference inside a computer system can cause a single bit of DRAM to spontaneously flip to the opposite state. Research has shown that the majority of one-off ("soft") errors in DRAM chips occur as a result of cosmic rays, which may change the contents of one or more memory cells or interfere with the circuitry used to read and write them. There is some concern that as DRAM density increases, the components on DRAM chips become smaller while operating voltages continue to fall, so DRAM chips may be affected by such radiation more frequently, since lower-energy particles will be able to change a memory cell's state. On the other hand, smaller cells make smaller targets, and moves to technologies such as SOI may make individual cells less susceptible, counteracting or even reversing this trend.
This problem can be mitigated by using DRAM modules that include extra memory bits, together with memory controllers that exploit these bits. The extra bits are used to record parity or to store an error-correcting code (ECC). Parity allows the detection of a single-bit error (in fact, of any odd number of wrong bits). The most common error-correcting code, a Hamming code, allows a single-bit error to be corrected and (in the usual configuration, with an extra parity bit) double-bit errors to be detected.
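To make the Hamming-code idea concrete, the following C sketch implements a tiny Hamming(7,4) code, which corrects any single flipped bit via its syndrome. Real memory modules use wider SEC-DED codes (typically 72 bits protecting 64), usually with an extra overall parity bit for double-error detection; this miniature version only illustrates the principle.

```c
/* Hamming(7,4): 4 data bits protected by 3 parity bits; a nonzero syndrome
 * gives the position of a single flipped bit, which can then be corrected. */
#include <stdio.h>
#include <stdint.h>

/* Encode 4 data bits into a 7-bit codeword; bit i holds code position i+1. */
static uint8_t hamming74_encode(uint8_t d) {
    uint8_t d1 = d & 1, d2 = (d >> 1) & 1, d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;      /* covers positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;      /* covers positions 2,3,6,7 */
    uint8_t p4 = d2 ^ d3 ^ d4;      /* covers positions 4,5,6,7 */
    /* positions: 1=p1 2=p2 3=d1 4=p4 5=d2 6=d3 7=d4 */
    return p1 | (p2 << 1) | (d1 << 2) | (p4 << 3) | (d2 << 4) | (d3 << 5) | (d4 << 6);
}

/* Decode, correcting any single flipped bit; returns the 4 data bits. */
static uint8_t hamming74_decode(uint8_t cw) {
    /* Recompute each parity check; the syndrome is the error position. */
    uint8_t s1 = ((cw >> 0) ^ (cw >> 2) ^ (cw >> 4) ^ (cw >> 6)) & 1; /* 1,3,5,7 */
    uint8_t s2 = ((cw >> 1) ^ (cw >> 2) ^ (cw >> 5) ^ (cw >> 6)) & 1; /* 2,3,6,7 */
    uint8_t s4 = ((cw >> 3) ^ (cw >> 4) ^ (cw >> 5) ^ (cw >> 6)) & 1; /* 4,5,6,7 */
    uint8_t syndrome = s1 | (s2 << 1) | (s4 << 2);
    if (syndrome)                        /* nonzero: flip that bit back */
        cw ^= 1u << (syndrome - 1);
    uint8_t d1 = (cw >> 2) & 1, d2 = (cw >> 4) & 1, d3 = (cw >> 5) & 1, d4 = (cw >> 6) & 1;
    return d1 | (d2 << 1) | (d3 << 2) | (d4 << 3);
}

int main(void) {
    uint8_t data = 0xB;                       /* 1011                          */
    uint8_t cw   = hamming74_encode(data);
    uint8_t hit  = cw ^ (1u << 5);            /* a soft error flips position 6 */
    printf("stored 0x%X, read back 0x%X after a single-bit flip\n",
           (unsigned)data, (unsigned)hamming74_decode(hit));  /* still 0xB */
    return 0;
}
```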
Error detection and correction in computer systems seems to go in and out of fashion. Seymour Cray famously said "parity is for farmers" when asked why he left this out of the CDC 6600.[1] He included parity in the CDC 7600, and reputedly said "I learned that a lot of farmers buy computers." 486-era PCs often used parity.[citation needed] Pentium-era ones mostly did not. Wider memory buses make parity and especially ECC more affordable. Current microprocessor memory controllers generally support ECC but most non-server systems do not use these features. Even if they do, it is not clear that the software layers do their part.
Memory controllers in most modern PCs can typically detect and correct errors of a single bit per 64-bit "word" (the unit of bus transfer), and detect (but not correct) errors of two bits per 64-bit word. Some systems also "scrub" the errors by writing the corrected version back to memory. The BIOS in some computers, and operating systems such as Linux, can count detected and corrected memory errors, in part to help identify failing memory modules before the problem becomes catastrophic. Unfortunately, most modern PCs are supplied with memory modules that have no parity or ECC bits.
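Scrubbing as described above can be sketched in a few lines of C. This is a software-style illustration only: real scrubbing is performed by the memory controller, and the buffer here merely stands in for a region of ECC-protected memory.

```c
/* Walk memory, read each word and write it back, so that any correctable
 * error fixed on the fly by the ECC logic is also committed back to DRAM. */
#include <stddef.h>
#include <stdint.h>

static void scrub(volatile uint64_t *mem, size_t words) {
    for (size_t i = 0; i < words; i++) {
        uint64_t v = mem[i];   /* read: controller corrects single-bit errors */
        mem[i] = v;            /* write back the corrected value              */
    }
}

int main(void) {
    static uint64_t region[1024];   /* stand-in for ECC-protected memory */
    scrub(region, 1024);
    return 0;
}
```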
Error detection and correction depends on an expectation of the kinds of errors that occur. Implicitly, we have assumed that the failure of each bit in a word of memory is independent and hence that two simultaneous errors are improbable. This used to be the case when memory chips were one bit wide (typical in the first half of the 1980s). Now many bits are in the same chip. This weakness does not seem to be widely addressed; one exception is Chipkill.
A reasonable rule of thumb is to expect one bit error per month per gigabyte of memory; actual error rates vary widely.
DRAM packaging
For economic reasons, the large (main) memories found in personal computers, workstations, and non-handheld game consoles (such as the PlayStation and Xbox) normally consist of dynamic RAM (DRAM). Other parts of the computer, such as cache memories and data buffers in hard disks, normally use static RAM (SRAM).
General DRAM packaging formats
DDR2 SDRAM packages
Common DRAM packages
EDO DRAM memory module
Dynamic random access memory is produced as integrated circuits (ICs) bonded and mounted into plastic packages with metal pins for connection to control signals and buses. Today, these DRAM packages are in turn often assembled into plug-in modules for easier handling. Some standard module types are:
· DRAM chip (Integrated Circuit or IC)
o Dual in-line Package (DIP)
· DRAM (memory) modules
o Single In-line Pin Package (SIPP)
o Single In-line Memory Module (SIMM)
o Dual In-line Memory Module (DIMM)
o Rambus In-line Memory Module (RIMM), technically DIMMs but called RIMMs due to their proprietary slot.
o Small outline DIMM (SO-DIMM), about half the size of regular DIMMs, are mostly used in notebooks, small footprint PCs (such as Mini-ITX motherboards), upgradable office printers and networking hardware like routers. Comes in versions with:
§ 72 pins (32-bit)
§ 144 pins (64-bit)
§ 200 pins (72-bit)
o Small outline RIMM (SO-RIMM). Smaller version of the RIMM, used in laptops. Technically SO-DIMMs but called SO-RIMMs due to their proprietary slot.
· Stacked vs. non-stacked RAM modules
o Stacked RAM modules contain two or more RAM chips stacked on top of each other. This allows large modules (such as 512 MB or 1 GB SO-DIMMs) to be manufactured using cheaper low-density wafers. Stacked chip modules draw more power.
Common DRAM modules
Common DRAM packages as illustrated to the right, from top to bottom:
1. DIP 16-pin (DRAM chip, usually pre-FPRAM)
2. SIPP (usually FPRAM)
3. SIMM 30-pin (usually FPRAM)
4. SIMM 72-pin (so-called "PS/2 SIMM", usually EDO RAM)
5. DIMM 168-pin (SDRAM)
6. DIMM 184-pin (DDR SDRAM)
7. DIMM 240-pin (DDR2 SDRAM/DDR3 SDRAM)
Variations
Asynchronous DRAM
This is the basic form, from which all others are derived. An asynchronous DRAM chip has power connections, some number of address inputs (typically 12), and a few (typically 1 or 4) bidirectional data lines. There are four active low control signals:
· /RAS, the Row Address Strobe. The address inputs are captured on the falling edge of /RAS, and select a row to open. The row is held open as long as /RAS is low.
· /CAS, the Column Address Strobe. The address inputs are captured on the falling edge of /CAS, and select a column from the currently open row to read or write.
· /WE, Write Enable. This signal determines whether a given falling edge of /CAS is a read (if high) or write (if low). If low, the data inputs are also captured on the falling edge of /CAS.
· /OE, Output Enable. This is an additional signal that controls output to the data I/O pins. The data pins are driven by the DRAM chip if /RAS and /CAS are low, and /WE is high, and /OE is low. In many applications, /OE can be permanently connected low (output always enabled), but it can be useful when connecting multiple memory chips in parallel.
This interface provides direct control of internal timing. When /RAS is driven low, a /CAS cycle must not be attempted until the sense amplifiers have sensed the memory state, and /RAS must not be returned high until the storage cells have been refreshed. When /RAS is driven high, it must be held high long enough for precharging to complete.
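The sequencing rules above can be illustrated with a sketch of one read cycle. The pin helpers below are hypothetical logging stubs, not a real hardware API, and the delays correspond to the "50 ns" grade in the earlier timing table.

```c
/* Sketch of one asynchronous DRAM read cycle driven by the four control
 * signals; the helpers just log signal changes (on real hardware they
 * would drive GPIO or a bus controller). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static void set_pin(const char *name, bool level) { printf("%s=%d\n", name, level); }
static void delay_ns(unsigned ns)                 { printf("wait %u ns\n", ns); }
static void drive_address(uint16_t a)             { printf("addr=0x%03X\n", a); }
static uint8_t read_data_pins(void)               { return 0; /* stub */ }

static uint8_t dram_read(uint16_t row, uint16_t col) {
    set_pin("/WE", 1);            /* /WE high: this /CAS edge is a read   */
    set_pin("/OE", 0);            /* /OE low: let the chip drive the bus  */

    drive_address(row);
    set_pin("/RAS", 0);           /* falling /RAS latches the row address */
    delay_ns(11);                 /* tRCD: wait before dropping /CAS      */

    drive_address(col);
    set_pin("/CAS", 0);           /* falling /CAS latches the column      */
    delay_ns(13);                 /* tCAC: data becomes valid             */
    uint8_t value = read_data_pins();

    set_pin("/CAS", 1);
    delay_ns(26);                 /* keep /RAS low until tRAS (50 ns) met */
    set_pin("/RAS", 1);
    delay_ns(30);                 /* tRP: precharge before the next cycle */
    return value;
}

int main(void) {
    dram_read(0x0A5, 0x03C);
    return 0;
}
```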
Video DRAM (VRAM)
VRAM is a dual-ported variant of DRAM which was once commonly used to store the frame-buffer in some graphics adaptors.
It was invented in 1980 by F. Dill and R. Matick at IBM Research, with a patent issued in 1985 (US Patent 4,541,075). The first commercial use of VRAM was in a high-resolution graphics adapter introduced in 1986 by IBM for the PC/RT system.
VRAM has two sets of data output pins, and thus two ports that can be used simultaneously. The first port, the DRAM port, is accessed by the host computer in a manner very similar to traditional DRAM. The second port, the video port, is typically read-only and is dedicated to providing a high-speed data channel for the graphics chipset.
Typical DRAM arrays normally access a full row of bits (i.e. a word line), up to 1024 bits at a time, yet use only one or a few of these for actual data, discarding the remainder. Since DRAM cells are destructively read, each bit accessed must be sensed and re-written, so typically 1024 sense amplifiers are used. VRAM does not discard the excess bits that must be accessed, but makes full use of them in a simple way. If each horizontal scan line of a display is mapped to a full word, then upon reading one word and latching all 1024 bits into a separate row buffer, these bits can subsequently be streamed serially to the display circuitry. This leaves the DRAM array free to be accessed (read or written) for many cycles, until the row buffer is almost depleted. A complete DRAM read cycle is only required to fill the row buffer, leaving most DRAM cycles available for normal accesses.
Such operation is described in the paper "All points addressable raster display memory" by R. Matick, D. Ling, S. Gupta, and F. Dill, IBM Journal of R&D, Vol. 28, No. 4, July 1984, pp. 379-393. To use the video port, the controller first uses the DRAM port to select the row of the memory array that is to be displayed. The VRAM then copies that entire row to an internal row buffer, which is a shift register. The controller can then continue to use the DRAM port for drawing objects on the display. Meanwhile, the controller feeds a clock called the shift clock (SCLK) to the VRAM's video port. Each SCLK pulse causes the VRAM to deliver the next item of data, in strict address order, from the shift register to the video port. For simplicity, the graphics adapter is usually designed so that the contents of a row, and therefore the contents of the shift register, correspond to a complete horizontal line on the display.
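A toy model of the video port described above is sketched in C below. The 1024-bit row width is the example figure from the text; the structure and function names are purely illustrative.

```c
/* One DRAM row is copied into a shift register, then clocked out one bit
 * per SCLK pulse while the DRAM port stays free for normal accesses. */
#include <stdio.h>
#include <string.h>

#define ROW_BITS 1024

typedef struct {
    unsigned char row_buffer[ROW_BITS]; /* internal shift register */
    int shift_pos;                      /* next bit to present     */
} vram_video_port;

/* "Row transfer": latch a whole DRAM row into the shift register. */
static void load_row(vram_video_port *vp, const unsigned char *dram_row) {
    memcpy(vp->row_buffer, dram_row, ROW_BITS);
    vp->shift_pos = 0;
}

/* One SCLK pulse: the video port delivers the next bit in address order. */
static int sclk(vram_video_port *vp) {
    return vp->row_buffer[vp->shift_pos++ % ROW_BITS];
}

int main(void) {
    unsigned char scan_line[ROW_BITS] = {1, 0, 1, 1 /* rest zero */};
    vram_video_port port;

    load_row(&port, scan_line);          /* one DRAM read cycle          */
    for (int i = 0; i < 8; i++)          /* display logic clocks out bits */
        printf("%d", sclk(&port));       /* while the DRAM port is free   */
    printf("\n");
    return 0;
}
```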
In the late 1990s, standard DRAM technologies (e.g. SDRAM) became cheap, dense, and fast enough to completely displace VRAM, even though it was only single-ported and some memory bits were wasted.
Fast Page Mode DRAM (FPM)
Fast page mode DRAM is also called FPM DRAM, Page mode DRAM, Fast page mode memory, or Page mode memory.
A 256Kx4 DRAM on an early PC memory card
In page mode, a row of the DRAM can be kept "open" by holding /RAS low while performing multiple reads or writes with separate pulses of /CAS, so that successive reads or writes within the row do not suffer the delay of precharging and re-accessing the row. This increases the performance of the system when reading or writing bursts of data.
Static column is a variant of page mode in which the column address does not need to be strobed in, but rather, the address inputs may be changed with /CAS held low, and the data output will be updated accordingly a few nanoseconds later.
Nibble mode is another variant in which four sequential locations within the row can be accessed with four consecutive pulses of /CAS. The difference from normal page mode is that the address inputs are not used for the second through fourth /CAS edges; they are generated internally starting with the address supplied for the first /CAS edge.
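The benefit of keeping a row open can be estimated with the asynchronous timings given earlier (tRC = 84 ns, tPC = 20 ns for the "50 ns" grade). The following C snippet compares a burst of accesses with and without page mode; the numbers are illustrative and ignore controller overhead.

```c
/* Rough arithmetic for the benefit of fast page mode. */
#include <stdio.h>

int main(void) {
    const double tRC = 84.0;   /* full random read cycle        */
    const double tPC = 20.0;   /* page-mode /CAS-to-/CAS cycle  */
    const int n = 8;           /* accesses that hit the same row */

    double without_fpm = n * tRC;
    double with_fpm    = tRC + (n - 1) * tPC;   /* open once, then page hits */

    printf("%d reads: %.0f ns without page mode, %.0f ns with (%.1fx)\n",
           n, without_fpm, with_fpm, without_fpm / with_fpm);
    return 0;
}
```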
CAS before RAS refresh
Classic asynchronous DRAM is refreshed by opening each row in turn. This can be done by supplying a row address and pulsing /RAS low; it is not necessary to perform any /CAS cycles. An external counter is needed to iterate over the row addresses in turn.
For convenience, the counter was quickly incorporated into RAM chips themselves. If the /CAS line is driven low before /RAS (normally an illegal operation), then the DRAM ignores the address inputs and uses an internal counter to select the row to open. This is known as /CAS-before-/RAS (CBR) refresh.
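A CBR refresh cycle can be sketched in the same style as the asynchronous read example above, again using hypothetical logging stubs rather than a real API; note that no row address is driven, since the chip's internal counter selects the row.

```c
/* Sketch of one /CAS-before-/RAS refresh cycle. */
#include <stdbool.h>
#include <stdio.h>

static void set_pin(const char *name, bool level) { printf("%s=%d\n", name, level); }
static void delay_ns(unsigned ns)                 { printf("wait %u ns\n", ns); }

static void cbr_refresh_one_row(void) {
    set_pin("/CAS", 0);   /* /CAS falls first: "illegal" for a normal access   */
    delay_ns(10);
    set_pin("/RAS", 0);   /* /RAS falls second: the chip refreshes the row its */
    delay_ns(60);         /* internal counter points at, then increments it    */
    set_pin("/RAS", 1);
    set_pin("/CAS", 1);
    delay_ns(40);         /* precharge before the next access                  */
}

int main(void) {
    cbr_refresh_one_row();
    return 0;
}
```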
Window RAM (WRAM)
Window RAM or WRAM is an obsolete type of semiconductor computer memory that was designed to replace video RAM (VRAM) in graphics adapters. It was developed by Samsung and also marketed by Micron Technology, but had only a short market life before being superseded by SDRAM and SGRAM.
WRAM has a dual-ported dynamic RAM structure similar to that of VRAM, with one parallel port and one serial port, but has extra features to enable fast block copies and block fills (so-called window operations). It was often clocked at 50 MHz. It has a 32-bit wide host port to enable optimal data transfer in PCI and VESA Local Bus systems. Typically WRAM was 50% faster than VRAM, but with costs 20% lower. It is sometimes erroneously called Windows RAM, because of confusion with the Microsoft Windows operating systems, to which it is unrelated apart from the fact that window operations could boost the performance of windowing systems.
It was used by Matrox on both their MGA Millennium and Millennium II graphics cards.
Extended Data Out (EDO) DRAM
A pair of 32 MiB EDO DRAM modules.
EDO DRAM is similar to Fast Page Mode DRAM with the additional feature that a new access cycle can be started while keeping the data output of the previous cycle active. This allows a certain amount of overlap in operation (pipelining), allowing somewhat improved speed. It was 5% faster than Fast Page Mode DRAM, which it began to replace in 1993.
To be precise, EDO DRAM begins data output on the falling edge of /CAS, but does not stop the output when /CAS rises again. It holds the output valid (thus extending the data output time) until either /RAS is deasserted, or a new /CAS falling edge selects a different column address.
Single-cycle EDO has the ability to carry out a complete memory transaction in one clock cycle. Otherwise, each sequential RAM access within the same page takes two clock cycles instead of three, once the page has been selected. EDO's speed allowed it to partially substitute for the then-slow L2 caches of PCs: it reduced the large performance loss associated with a lack of L2 cache while making systems cheaper to build. This was also beneficial for notebooks, given their limited form factor and battery life constraints. An EDO system with L2 cache was tangibly faster than the older FPM/L2 combination.
Single-cycle EDO DRAM became very popular on video cards towards the end of the 1990s. It was very low cost, yet nearly as efficient for performance as the far more costly VRAM.
EDO was sometimes referred to as Hyper Page Mode.
Burst EDO (BEDO) DRAM
An evolution of EDO DRAM, Burst EDO DRAM could process four memory addresses in one burst, for a maximum of 5-1-1-1, saving an additional three clocks over optimally designed EDO memory. This was done by adding an address counter on the chip to keep track of the next address. BEDO also added a pipeline stage, allowing the page-access cycle to be divided into two parts. During a memory-read operation, the first part accessed the data from the memory array to the output stage (second latch); the second part drove the data bus from this latch at the appropriate logic level. Since the data is already in the output buffer, faster access times are achieved (up to 50% for large blocks of data) than with traditional EDO.
Although BEDO DRAM showed additional optimization over EDO, by the time it was available the market had made a significant investment towards synchronous DRAM, or SDRAM [2]. Even though BEDO RAM was superior to SDRAM in some ways, the latter technology gained significant traction and quickly displaced BEDO.
Multibank DRAM (MDRAM)
Multibank RAM applies the interleaving technique for main memory to second level cache memory to provide a cheaper and faster alternative to SRAM. The chip splits its memory capacity into small blocks of 256 kB and allows operations to two different banks in a single clock cycle.
This memory was primarily used in graphic cards with Tseng Labs ET6x00 chipsets, and was made by MoSys. Boards based upon this chipset often used the unusual RAM size configuration of 2.25 MiB, owing to MDRAM's ability to be implemented in various sizes more easily. This size of 2.25 MiB allowed 24-bit color at a resolution of 1024×768, a very popular display setting in the card's time.
Synchronous Graphics RAM (SGRAM)
SGRAM is a specialized form of SDRAM for graphics adaptors. It adds functions such as bit masking (writing to a specified bit plane without affecting the others) and block write (filling a block of memory with a single colour). Unlike VRAM and WRAM, SGRAM is single-ported. However, it can open two memory pages at once, which simulates the dual-port nature of other video RAM technologies.
SGRAM and SDRAM became the most popular types of DRAM at the end of the 1990s, and well into the first decade of the 2000s.
Synchronous Dynamic RAM (SDRAM)
Single Data Rate (SDR) SDRAM is a synchronous form of DRAM.
Direct Rambus DRAM (DRDRAM)
Direct Rambus DRAM (DRDRAM) is a proprietary synchronous DRAM technology developed by Rambus.
Double Data Rate (DDR) SDRAM
Double data rate (DDR) SDRAM is a later development of SDRAM, used in PC memory beginning in 2000. DDR2 SDRAM was originally seen as a minor enhancement of DDR SDRAM that mainly afforded higher clock speeds and somewhat deeper pipelining. With the introduction and rapid acceptance of multi-core CPUs in 2006, DDR2 was widely expected to displace the original DDR SDRAM standard, and with the development and anticipated introduction of DDR3 SDRAM in 2007, DDR3 was in turn expected to rapidly replace the more limited DDR and newer DDR2.
Pseudostatic RAM (PSRAM)
PSRAM or PSDRAM is dynamic RAM with built-in refresh and address-control circuitry to make it behave similarly to static RAM (SRAM). It combines the high density of DRAM with the ease of use of true SRAM.
Some DRAM components have a "self-refresh mode". While this involves much of the same logic that is needed for pseudo-static operation, this mode is often equivalent to a standby mode. It is provided primarily to allow a system to suspend operation of its DRAM controller to save power without losing data stored in DRAM, not to allow operation without a separate DRAM controller as is the case with PSRAM.
An embedded variant of pseudostatic RAM is sold by MoSys under the name 1T-SRAM. It is technically DRAM, but behaves much like SRAM. It is used in the Nintendo GameCube and Wii consoles.
1T DRAM
Unlike all of the other variants described here, 1T DRAM is a different way of constructing the basic DRAM bit cell. 1T DRAM is a "capacitorless" bit cell design that stores data in the parasitic body capacitor that is an inherent part of silicon-on-insulator (SOI) transistors. Considered a nuisance in logic design, this floating body effect can be used for data storage. Although refresh is still required, reads are non-destructive; the stored charge causes a detectable shift in the threshold voltage of the transistor (Sallese, Jean-Michel (2002-06-20), "Principles of the 1T Dynamic Access Memory Concept on SOI", MOS Modeling and Parameter Extraction Group Meeting, retrieved 2007-10-07).
1T DRAM is commercialized under the name Z-RAM.
Note that the classic one-transistor/one-capacitor (1T/1C) DRAM cell is also sometimes referred to as "1T DRAM".
RLDRAM
Reduced Latency DRAM is a high speed double data rate (DDR) SDRAM that combines fast, random access with high bandwidth. RLDRAM is mainly designed for networking and caching applications.
Security
Although dynamic memory requires power and periodic refresh to maintain its data reliably, the data is retained until the capacitors discharge, which does not happen instantly. Over a period of time (typically minutes), depending on the properties of the semiconductor and on temperature, the data decays and is eventually lost.
This property can be used to recover "secure" data kept in memory by quickly rebooting the computer and dumping the contents of the RAM or by cooling the chips and transferring them to a different computer. Such an attack was demonstrated to circumvent Microsoft's BitLocker Drive Encryption[6].
See also
· DRAM price fixing
· DIMM
· Flash memory
· Regenerative capacitor memory
· Static random access memory
· List of device bandwidths
References
1. ^ http://www.oldcalculatormuseum.com/toshbc1411.html
2. ^ http://parts.jpl.nasa.gov/docs/DRAM_Indiv-00.pdf
3. ^ http://download.micron.com/pdf/datasheets/dram/d47b.pdf
4. ^ http://www.corsairmemory.com/corsair/products/specs/cmx1024-3200.pdf
5. ^ http://www.corsairmemory.com/corsair/products/specs/twinx1024-3200xl.pdf
6. ^ "Lest We Remember: Cold Boot Attacks on Encryption Keys", Center for Information Technology Policy, Princeton University. citp.princeton.edu, 2008-02-22.
External links
· Basic DRAM operation has some interesting historical trend charts of cell size and DRAM density from 1995.
· Back to Basics - Memory, part 3
· Benefits of Chipkill-Correct ECC for PC Server Main Memory - A 1997 discussion of SDRAM reliability - some interesting information on "soft errors" from cosmic rays, especially with respect to Error-correcting code schemes
· Tezzaron Semiconductor Soft Error White Paper, a 1994 literature review of memory error rate measurements.
· Soft errors' impact on system reliability - Ritesh Mastipuram and Edwin C Wee, Cypress Semiconductor, 2004
· Scaling and Technology Issues for Soft Error Rates - A Johnston - 4th Annual Research Conference on Reliability Stanford University, October 2000
· Challenges and future directions for the scaling of dynamic random-access memory (DRAM) - J. A. Mandelman, R. H. Dennard, G. B. Bronner, J. K. DeBrosse, R. Divakaruni, Y. Li, and C. J. Radens, IBM 2002
· Ars Technica: RAM Guide
· Versatile DRAM interface for the 6502 CPU
· David Tawei Wang (2005). "Modern DRAM Memory Systems: Performance Analysis and a High Performance, Power-Constrained DRAM-Scheduling Algorithm". PhD thesis, University of Maryland, College Park. Retrieved on 2007-03-10. A detailed description of current DRAM technology.
· The Toshiba "Toscal" BC-1411 Desktop Calculator - An early electronic calculator that uses a form of dynamic RAM built from discrete components.
· Mitsubishi's 3D-RAM And Cache DRAM incorporate high-speed, on-board SRAM cache
· Multi-port Cache DRAM - MP-RAM
· DRAM timings explained