SharkyForums.Com - Print: Memory bandwidth and today's graphics cards.

    Memory bandwidth and today's graphics cards.
    By GHz July 19, 2001, 09:59 PM

    I saw something in the General Games forum the other day that reminded me of the thread in this forum about the use of RDRAM and its bandwidth with today's bandwidth-hungry graphics processors, particularly the GeForce family of GPUs. Lately I've been researching which would be the better upgrade for me, considering I upgrade my CPU about every year, and I've been considering the Pentium 4 since I also want a 2GHz processor. As we know, the memory bandwidth of the i850 and i860 motherboards comes from a dual-channel memory setup that doubles the theoretical bandwidth of a single RDRAM module. My question is, why can't we implement this in graphics cards, except using the more popular DDR SDRAM that graphics cards already use? Would the use of dual-channel memory in graphics cards:
    1. Allow for slower, cheaper DDR memory?
    2. Keep up with the bandwidth that today's graphics cards need?
    Intel had a great idea to boost memory bandwidth by implementing a dual-channel memory system in their chipsets, so why can't the graphics card companies do the same?
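
    To put rough numbers on the dual-channel idea, here is a minimal Python sketch of the peak-bandwidth arithmetic, using commonly quoted figures of the time rather than any vendor's exact specs:

        # Peak bandwidth = bus width (bytes) x effective data rate x channels.
        def bandwidth_gb_s(bus_width_bits, data_rate_mt_s, channels=1):
            return bus_width_bits / 8 * data_rate_mt_s * 1e6 * channels / 1e9

        print(bandwidth_gb_s(16, 800))              # one PC800 RDRAM channel: ~1.6 GB/s
        print(bandwidth_gb_s(16, 800, channels=2))  # i850-style dual channel: ~3.2 GB/s
        print(bandwidth_gb_s(128, 460))             # 128-bit DDR at 460 MT/s (GeForce3-class): ~7.4 GB/s

    The point of the comparison is that a graphics card's wide local DDR bus already delivers more raw bandwidth than a dual-channel RDRAM system bus; the more interesting question is how efficiently that bandwidth gets used.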

    By Moridin July 20, 2001, 09:50 AM

    If I'm not mistaken the GF3 does use something like dual channel. (Actually quad channel) IIRC it uses 4 separate memory controllers each of which accesses a 32-bit wide DDR memory interface.

    This still gives the same memory bandwidth as the single 128-bit wide memory interface on the GF2, but in some cases it does allow for more efficient use of that bandwidth. (If you wanted to load smaller chunks of data from different places in memory, the 128-bit interface would be inefficient.) Overall though the difference is not spectacular.
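
    A toy model of that granularity point, with made-up request sizes and burst lengths purely for illustration, assuming one memory access per request:

        # Four small scattered reads served by one wide interface vs four narrow ones.
        REQUESTS = [8, 8, 8, 8]   # four 8-byte reads from unrelated addresses

        def useful_fraction(channel_width_bytes, burst_beats, requests):
            # Fraction of the fetched bytes that were actually requested.
            per_access = channel_width_bytes * burst_beats
            return sum(requests) / (per_access * len(requests))

        print(useful_fraction(16, 2, REQUESTS))  # one 128-bit channel: 0.25 (75% of the fetch wasted)
        print(useful_fraction(4, 2, REQUESTS))   # four 32-bit channels: 1.0 (every fetched byte used)

    Both layouts have the same peak bandwidth; the narrow independent controllers just waste less of it on small scattered accesses, which matches the "more efficient but not spectacular" description above.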

    By leoku July 20, 2001, 03:21 PM

    quote:Originally posted by Moridin:
    If I'm not mistaken the GF3 does use something like dual channel. (Actually quad channel) IIRC it uses 4 separate memory controllers each of which accesses a 32-bit wide DDR memory interface.

    This still gives the same memory bandwidth as the single 128-bit wide memory interface on the GF2, but in some cases it does allow for more efficient use of that bandwidth. (If you wanted to load smaller chunks of data from different places in memory, the 128-bit interface would be inefficient.) Overall though the difference is not spectacular.



    RDRAM has a very bad latency problem that keeps it out of mainstream implementations. The GF3 uses a crossbar memory architecture that can efficiently use cheaper and faster DDR to achieve optimum bandwidth utilization.

    By Moridin July 20, 2001, 04:02 PM

    quote:Originally posted by leoku:
    RDRAM has a very bad latency problem that keeps it out of mainstream implementations. The GF3 uses a crossbar memory architecture that can efficiently use cheaper and faster DDR to achieve optimum bandwidth utilization.

    Crossbar memory architecture is nVidia's terminology. It amounts to what I described above.

    Nobody said anything about RDRAM. Since you bring it up, though, RDRAM does have higher latencies, but the difference is not enormous. Aside from that, most of the activity should lend itself nicely to prefetch, which can effectively make latency irrelevant.


    By SlartyB July 21, 2001, 12:24 AM

    quote:Originally posted by Moridin:
    Crossbar memory architecture is nVidia's terminology. It amounts to what I described above.

    Nobody said anything about RDRAM. Since you bring it up, though, RDRAM does have higher latencies, but the difference is not enormous. Aside from that, most of the activity should lend itself nicely to prefetch, which can effectively make latency irrelevant.


    This is true, but larger latency means earlier pre-fetch which means longer prefetch pipelines which means bigger and more costly chips. In an environment like PC graphics cards, this is a big problem.

    By Moridin July 21, 2001, 04:43 PM

    quote:Originally posted by SlartyB:
    This is true, but larger latency means earlier pre-fetch which means longer prefetch pipelines which means bigger and more costly chips. In an environment like PC graphics cards, this is a big problem.


    I'm not quite sure what you mean. AFAIK Prefetch is basically a fire and forget operation, I don't think it even takes up a load/store slot. All that happens on a prefetch is that the appropriate line is loaded into (L2?) cache.

    From the memory controller's perspective I don't think it matters whether this was triggered by a load or a prefetch; it just fetches the cache line.

    If you're using a software prefetch you have to make sure that it is sufficiently far ahead of the actual operation you want the data for, but this doesn't seem to be what you are talking about.

    By SlartyB July 22, 2001, 12:10 PM

    quote:Originally posted by Moridin:

    I'm not quite sure what you mean. AFAIK Prefetch is basically a fire and forget operation, I don't think it even takes up a load/store slot. All that happens on a prefetch is that the appropriate line is loaded into (L2?) cache.

    From the memory controller's perspective I don't think it matters whether this was triggered by a load or a prefetch; it just fetches the cache line.

    If you're using a software prefetch you have to make sure that it is sufficiently far ahead of the actual operation you want the data for, but this doesn't seem to be what you are talking about.

    You are thinking along the right lines with the software prefetch in the last paragraph - but you have to realise that we are not talking about a piece of software running on a CPU. We are talking about a piece of hardware, where the instruction pipeline is fixed and any delays have to be physically accounted for in the silicon - hence the deeper pipeline I mentioned. If a longer latency is required to fetch a particular texture sample (for example), then the graphics chip has to process the requester of that data that much further ahead of time, which means it has to have a longer pipeline to account for the latency.
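
    A back-of-the-envelope sketch of that trade-off in Python; the clock rate and latency figures are illustrative assumptions, not measurements of any particular chip:

        # A fixed-function pipeline has to keep roughly (latency x clock) requests
        # in flight to hide memory latency, and every in-flight request is real silicon.
        core_clock_mhz = 200
        mem_latency_ns = {"DDR SDRAM": 40, "RDRAM": 60}   # assumed round-trip latencies

        for mem, ns in mem_latency_ns.items():
            stages = ns * core_clock_mhz / 1000           # latency in core clock cycles
            print(f"{mem}: ~{stages:.0f} pipeline stages of in-flight state")

    Under those assumptions the higher-latency part needs roughly half again as many stages of buffering, which is where the "bigger and more costly chips" argument comes from.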

    By Moridin July 22, 2001, 12:54 PM

    After reading this I realize I was thinking more along the lines of a generalized MPU not a graphics chip.

    I would still think this problem is quite easily solved; after all, modern GPUs are essentially just specialized microprocessors. Simple, well-understood caching techniques should handle this quite easily.

    By GHz July 22, 2001, 04:35 PM

    I believe that the nForce chipsets will implement a dual-channel DDR SDRAM setup, correct me if I'm wrong. So what do you guys think is the possibility that nVidia will implement dual-channel DDR SDRAM in their graphics cards, say in the GeForce 4 or whatever they will call their next generation of graphics cards?

    By 3dcgi July 22, 2001, 11:24 PM

    I agree with Moridin that the GeForce3 is actually quad channel. My understanding of dual-channel chipsets is that they can talk to two DIMMs or RIMMs at the same time. Graphics cards don't have DIMMs, they just use memory chips. By talking to 4 groups of chips at the same time nVidia achieves quad-channel performance.

    By SlartyB July 23, 2001, 02:03 AM

    quote:Originally posted by Moridin:
    After reading this I realize I was thinking more along the lines of a generalized MPU not a graphics chip.

    I would still think this problem is quite easily solved; after all, modern GPUs are essentially just specialized microprocessors. Simple, well-understood caching techniques should handle this quite easily.


    Yes - it's not rocket science, but you still have to keep around more pending transactions than you would otherwise. This could introduce potential dependency problems and makes access to memory more complex. Even if the increased complexity isn't that much, the graphics chip manufacturers (huh - who am I trying to kid, I mean nVidia) are going to choose the easiest, cheapest solution that causes the fewest verification headaches. I can't see that RDRAM fits that scenario very well.

    By SlartyB July 23, 2001, 02:10 AM

    quote:Originally posted by 3dcgi:
    I agree with Moridin that the GeForce3 is actually quad channel. My understanding of dual-channel chipsets is that they can talk to two DIMMs or RIMMs at the same time. Graphics cards don't have DIMMs, they just use memory chips. By talking to 4 groups of chips at the same time nVidia achieves quad-channel performance.

    Yes - it's quad channel. Have a read of nVidia's paper ...

    http://www.nvidia.com/docs/IO/84/ATT/LightspeedMemoryArchitecture.pdf

    By idris5 July 23, 2001, 09:40 AM

    Graphics memory access is highly deterministic, and as such there is no reason why creating a long prefetch pipeline should be a major penalty from a 'memory misprediction' point of view.

    As far as transistor budget is concerned, I don't think it even comes into consideration when we are looking at 50 million transistor processors.

    The same comment goes for verification, but what do you mean by verification - formal? functional? STA gate level? STA Tx level? (all of the above?) Regardless, I can't see why verifying a fairly simple pipeline would be particularly difficult (the only disadvantage of a long prefetch pipeline as far as verification is concerned is that it will slightly increase the overall time to verify the rest of the chip - but a 2% increase or so shouldn't matter compared to the advantages Rambus could possibly provide).

    I think that the performance benefits that could be gained by using RDRAM outweigh the disadvantages in terms of chip design - but then again, nVidia aren't going that route, so there may be other (possibly political) reasons for this.

    By elimc July 27, 2001, 01:45 AM

    Slarty B,

    You say that latency is a problem with RDRAM. But the way I understand it, as the bandwidth load increases, DDR SDRAM latency increases drastically while RDRAM stays on a plateau. Since graphics cards are greatly constrained by bandwidth, why wouldn't RDRAM be a good solution? And do you think graphics cards will move to RDRAM in the future, since DDR gets more complicated as it is clocked higher?

    By SlartyB July 27, 2001, 11:57 AM

    quote:Originally posted by elimc:
    Slarty B,

    You say that latency is a problem with RDRAM. But the way I understand it, as the bandwidth load increases, DDR SDRAM latency increases drastically while RDRAM stays on a plateau.

    Could you explain this or give a link to somewhere that does? I am not aware of any such effect. I doubt very much if RDRAM will ever be adopted in a mainstream video product due to cost and complexity (the interface will be harder to get right since it is running faster).

    By elimc July 27, 2001, 12:23 PM

    Here you go Slarty B:

    http://www.anandtech.com/showdoc.html?i=1245&p=3


    This graph just shows latency for SDRAM and not DDR SDRAM like I originally thought. Whoops! But I imagine as bandwidth increases DDR SDRAM would have the same problem, especially since I don't see DDR scaling very well. I think they will have to move to a new design for DDR II. They are having problems getting DDR I out the door right now, which is why the chipsets with DDR memory have been delayed.

    By elimc July 27, 2001, 12:32 PM

    Oh, here's a more relevant link in the same article:

    http://www.anandtech.com/showdoc.html?i=1245&p=5


    Basically it says DDR SDRAM is the ideal solution right now because of high bandwidth and low latency. It also says that in the future RDRAM will be primed to take over when pin count becomes an issue. BTW, I think DDR SDRAM will have more problems than just pin count as it gets faster (complexity, crosstalk, etc.). So looking ahead to the future beyond DDR, do you see RDRAM moving into graphics chips? I mean, the PS2 uses RDRAM. Not sure if this is similar in any way.

    By SlartyB July 27, 2001, 05:37 PM

    quote:Originally posted by elimc:
    Oh, here's a more relevant link in the same article:

    http://www.anandtech.com/showdoc.html?i=1245&p=5


    Basically it says DDR SDRAM is the ideal solution right now because of high bandwidth and low latency. It also says that in the future RDRAM will be primed to take over when pin count becomes an issue. BTW, I think DDR SDRAM will have more problems than just pin count as it gets faster (complexity, crosstalk, etc.). So looking ahead to the future beyond DDR, do you see RDRAM moving into graphics chips? I mean, the PS2 uses RDRAM. Not sure if this is similar in any way.

    Those are interesting articles, but not really relevant. They are talking about latency through motherboard chipsets. That article has nothing to do with the latency of the actual devices themselves - just the overall system latency. Since graphics chip manufacturers design their own bespoke memory controllers, they have much more control over memory accesses - which are much more of a direct interface compared to the proxy interface that happens on a motherboard.

    The potential problems for DDR that you mention are not unique to that type of memory. If anything, I would have said RDRAM is more problematical due to the higher clock speed. Careful PCB design and layout is more critical for RDRAM than DDR SDRAM. That said, as DDR clock speeds increase, then these factors will dominate more, but no more than the existing problems with RDRAM.

    By elimc July 27, 2001, 06:54 PM

    I see. Thanks for the info

    By elimc July 28, 2001, 01:35 AM

    Originally posted by elimc:


    I see. Thanks for the info


    Now I'm not sure if anyone has done this before, but I'm going to disagree with my own quote, lol. But, I actually don't really understand some of your points.

    First of all, the PCB design and layout is already done, as I understand it. Rambus has done all the research first and requires anyone who uses RDRAM to follow a strict set of guidelines. This is good for the company because they don't need as many engineers to test the design. So these engineers can be laid off which saves some money. Maybe this is part of the reason RDRAM hasn't really been introduced in graphics cards because the engineers haven't really wanted to push for it . . ? The layout and design problems are being solved as we speak and I don't see this as hindering entry into the market.

    Second, what problems are you referring to when you mention RDRAM technology? The problems of DDR SDRAM (ground bounce, electromagnetic interference, common mode noise, etc.) affect RDRAM negligibly. As clock speeds rise DDR is affected more and more. RDRAM doesn't face these problems as much.

    Since we are facing bandwidth problems right now, wouldn't it make sense for us to use Rambus right now? Wait a second . . . you mentioned something about economics. RDRAM costs quite a bit more than DDR right now. In a cutthroat business like graphics, margins are very important. I concede that point, but what about in the future? RDRAM prices are falling all the time while RDRAM keeps speeding up. It has lapped DDR and the lead is only going to grow bigger in the future. Rambus is already addressing cost needs with their 4iRDRAM and better verification tools. And even if Rambus still costs too much for graphics card vendors, then what about using it in the high end? A graphics card with Rambus would certainly alleviate memory constraints by a good factor. Even if it added $200 to the price of a card, wouldn't the high end still buy it if it's the best? Oh, I already mentioned the PS2, which happens to be selling pretty well. According to Sony, RDRAM saved them a lot of money. Why wouldn't this be similar to what could happen on a graphics card for your computer? I don't know the die penalty of Rambus, but I do know it has fewer memory controllers and trace pathways. So, in the future RDRAM will get faster, cheaper, and suffer less latency than now. Plus you can fire engineers, which would save more money. Right . . ?

    So, what's left? Maybe it's a management decision that just hasn't been implemented yet? Maybe I'm completely wrong? I do understand some of the basics, but I don't know much about graphics cards/memory.

    Someone tell me why RDRAM is not going to be used in the future.

    Wow, long post! *My head explodes*

    By SlartyB July 28, 2001, 12:14 PM

    quote:Originally posted by elimc:
    Originally posted by elimc:


    I see. Thanks for the info

    You're welcome

    quote:
    .....
    First of all, the PCB design and layout is already done, as I understand it. Rambus has done all the research first and requires anyone who uses RDRAM to follow a strict set of guidelines. This is good for the company because they don't need as many engineers to test the design. So these engineers can be laid off which saves some money. Maybe this is part of the reason RDRAM hasn't really been introduced in graphics cards because the engineers haven't really wanted to push for it . . ? The layout and design problems are being solved as we speak and I don't see this as hindering entry into the market.

    It is precisely because of those strict guidelines that it is not being adopted. Unless you follow the guidelines exactly, your design is unlikely to work. This puts severe limitations on the morphology of your circuit.

    quote:
    Second, what problems are you referring to when you mention RDRAM technology? The problems of DDR SDRAM (ground bounce, electromagnetic interference, common mode noise, etc.) affect RDRAM negligibly. As clock speeds rise DDR is affected more and more. RDRAM doesn't face these problems as much.

    Ummmm.... what?! Do you think the laws of physics are different for RDRAM than they are for any other type of electronic circuit? RDRAM suffers from all the things you state, which is why there are strict guidelines as to how you lay out the chips and terminate the busses.

    quote:
    Since we are facing bandwidth problems right now, wouldn't it make sense for us to use Rambus right now? Wait a second . . . you mentioned something about economics. RDRAM costs quite a bit more than DDR right now. In a cutthroat business like graphics, margins are very important. I concede that point, but what about in the future? RDRAM prices are falling all the time while RDRAM keeps speeding up. It has lapped DDR and the lead is only going to grow bigger in the future. Rambus is already addressing cost needs with their 4iRDRAM and better verification tools. And even if Rambus still costs too much for graphics card vendors, then what about using it in the high end? A graphics card with Rambus would certainly alleviate memory constraints by a good factor. Even if it added $200 to the price of a card, wouldn't the high end still buy it if it's the best? Oh, I already mentioned the PS2, which happens to be selling pretty well. According to Sony, RDRAM saved them a lot of money. Why wouldn't this be similar to what could happen on a graphics card for your computer? I don't know the die penalty of Rambus, but I do know it has fewer memory controllers and trace pathways. So, in the future RDRAM will get faster, cheaper, and suffer less latency than now. Plus you can fire engineers, which would save more money. Right . . ?

    Wrong. Firing engineers is the wrong thing to do. Do you think companies like nVidia, ATI or Creative are going to pay Rambus to layout their boards for them in preference to using their own engineers ? No, they won't.

    quote:

    So, what's left? Maybe it's a management decision that just hasn't been implemented yet? Maybe I'm completely wrong? I do understand some of the basics, but I don't know much about graphics cards/memory.

    Someone tell me why RDRAM is not going to be used in the future.

    Wow, long post! *My head explodes*


    I never said that Rambus would NEVER be used for ANY graphics card product, just that I thought it unlikely we will see it in a "mainstream" graphics card any time soon. Personally, I think the differences between DDR SDRAM and RDRAM will blur, with DDR catching up with RDRAM in terms of transfer speed and RDRAM coming down in price. I also predict that Rambus will issue even more lawsuits before it's done.

    By Kunal July 28, 2001, 01:52 PM

    RDRAM can of course be used in graphics boards, but when it comes to production, it's less viable than using SDR/DDR.

    I don't think we'll ever see an RDRAM graphics board, especially since DDR will evolve into DDR II and QDR in the next 24 months.

    By elimc July 28, 2001, 08:32 PM

    Originally posted by Slarty B:

    It is precisely because of those strict guidelines that it is not being adopted. Unless you follow the guidelines exactly, your design is unlikely to work. This puts severe limitations on the morphology of your circuit.


    I see. So mostly it's a business decision whether to use the in house design team or to subcontract the design to Rambus. Right . . ? Sony figured it was better for them to go with RDRAM rather than DDR SDRAM.


    Originally posted by Slarty B:

    Ummmm.... what?! Do you think the laws of physics are different for RDRAM than they are for any other type of electronic circuit? RDRAM suffers from all the things you state, which is why there are strict guidelines as to how you lay out the chips and terminate the busses.


    No, I don't think the laws of physics are different for Rambus. But let's face it:
    RDRAM is a superior signaling technology compared to DDR as we know it.

    1) Pseudo-differential signaling keeps differential noise and common-mode noise down, and is superior to DDR's solution of overpowering the noise by increasing the threshold voltage.

    As clock frequencies increase, differential noise increases, and so ground bounce increases as well! Rambus has solved this problem with a common-mode rejection scheme.

    2) Lower voltage swings help the same thing.
    Rambus voltage swing: 0.8V
    DDR SDRAM voltage swing: 2.5V

    3) Higher tolerances in Rambus are suited for scaling to higher speeds.

    4) RDRAM has much better channel to channel line isolation because of ground dams. This also allows RDRAM to be manufactured on 4 layer boards, which saves money.


    You can find this information in many white papers at the Rambus web site.

    And since RDRAM gives you power savings, wouldn't that decrease overall power consumption for the graphics chip? This would be good for the mobile segment, where you also don't use as much RAM and die real estate is more important. This would decrease the cost of the chip, I would think, but this is not my area of expertise and I am probably missing some things here.


    Originally by Slarty B:

    Wrong. Firing engineers is the wrong thing to do. Do you think companies like nVidia, ATI or Creative are going to pay Rambus to layout their boards for them in preference to using their own engineers ? No, they won't.


    Well, I wasn't really serious about firing engineers. In the end it's a management decision to go with Rambus or not. The GeForce and Radeon are using DDR in some of their boards while Sony decided to use Rambus. And I've heard that the XBA architecture from Bit Boys uses SDRAM. Meanwhile some of the GeForce and Radeon cards also use SDRAM. Whatever floats your boat?


    Originally posted by Kunal:

    I don't think we'll ever see an RDRAM graphics board . . .


    AHHHHHHHHHH!!!! PS2!!!!


    Originally posted by Kunal:

    . . . especially since DDR will evolve into DDR II and QDR in the next 24 months.


    Well, if they are able to move on to DDR II they will have to use a different design. Perhaps they will move to something similar to RDRAM. I think DDR will have to become more serial if they increase the clock speeds much. If they used today's DDR standards with QDR, hmmmmm . . . that would not be a very elegant solution.

    Seems like everything is moving to serial these days: InfiniBand, ATA, HyperTransport (I wouldn't be surprised if they were paying royalties to Rambus), USB, FireWire, RapidIO?, and PCI-X?

    I've come up with some more crazy thoughts!

    One bottleneck is the Northbridge. I believe nVidia is trying to solve this bottleneck by integrating the graphics card into the nForce motherboard. If this is not what they are doing with nForce, well, it would be a good idea. Adding a sound card to the board would make sense too. Would this get rid of the Northbridge? This means in the future, graphics cards as we know them today would be integrated into the motherboard, soldered on? To connect the CPU and the graphics card it would be best to use something like RDRAM or HyperTransport. DDR as we know it wouldn't work too well. Maybe they would use QDR RDRAM technology, which they have already demonstrated, and PC1200 memory, which has also been demonstrated. This would give them speeds of about 20 GB per second. That's insane. And this is all done on a 64-bit bus! Now I don't know that it's possible to make Rambus work on a 128-bit bus at that speed (probably not), but if they did it would result in 40 GB per second of bandwidth!!! This would spell the apocalypse! Taking out the Northbridge could send graphics cards as we know them the way of hard drive and CD-ROM controller cards!

    What do you guys think about all of this? I recommend that we all become nuns.
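
    For what it's worth, here is a quick Python check of the 20 GB/s and 40 GB/s figures above, assuming "QDR RDRAM" means QRSL-style signaling (four transfers per clock) on a 600 MHz PC1200 channel - an assumption, since no such product has shipped:

        def gb_per_s(bus_bits, clock_mhz, transfers_per_clock):
            # Peak bandwidth: bus width (bytes) x clock x transfers per clock.
            return bus_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

        print(gb_per_s(64, 600, 4))    # ~19.2 GB/s, the "about 20 GB/s" figure
        print(gb_per_s(128, 600, 4))   # ~38.4 GB/s, the "40 GB/s" figure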

    By idris5 July 29, 2001, 10:27 AM

    quote:Since graphics chip manufacturers design their own bespoke memory controllers, they have much more control over memory accesses - which are much more of a direct interface compared to the proxy interface that happens on a motherboard.

    There is no difference between designing a GPU memory controller and a North Bridge based memory controller - why should there be?

    quote:Plus you can fire engineers, which would save more money. Right . . ?

    Ha, ha, ha.......no

    There is a headcount freeze in the industry generally at the moment, but recruiting in deep sub-micron design for GPUs and even high-speed PCB design is still occurring, partly because it is difficult to get this type of engineer - have a look at the EETimes salary survey and you'll find that deep sub-micron designers are the most highly paid in the industry (averaging more than $100,000 in the US and almost as high in Europe).

    quote:Sony figured it was better for them to go with RDRAM rather than DDR SDRAM.

    The PS2 suits Rambus as there is a constant stream of data required by the EE - a similar case to GPUs in my opinion, which is why I think Rambus is suited to graphics.

    quote:No I don't think the laws of physics are different for Rambus. But let's face it:
    RDRAM is a superior signaling technology compared to DDR as we know it.

    Look at the reasons you wrote after this sentence.

    The first shows that Rambus needs a more complicated signalling system to overcome the problems created by operating at such high frequencies (more complicated does usually equal better).

    The second is a direct function of the type of signalling system that Rambus uses and is therefore not really comparable with DDR.

    The third is plain wrong - if Rambus is more scalable then why have we seen it stay at a maximum of 800MHz in all computing areas whereas DDR has gone from around 333MHz to over 500MHz in the same time scale?

    The fourth is partly valid, but in a similar way to the first comment, Rambus has to use this technique as it is more susceptible to noise if you don't use it.

    quote:In the end it's a management decision to go with Rambus or not.

    Yeah, I'd agree with that.


    By elimc July 29, 2001, 01:06 PM

    First of all, I wasn't really serious about firing engineers. I was mostly noting that their job wouldn't be needed because of the strict layout guidelines from Rambus. I've seen the EETimes salary survey too, and I'm sure the company would find something else for them to work on.

    Originally quoted by Idris5:

    The PS2 suits Rambus as there is a constant stream of data required by the EE - a similar case to GPUs in my opinion, which is why I think Rambus is suited to graphics.

    I'm thinking the same thing.

    Originally quoted by Idris5:

    The first shows that Rambus needs a more complicated signalling system to overcome the problems created by operating at such high frequencies (more complicated does usually equal better).

    I assume you mean, "more complicated does not equal better." How is DDR less complex than RDRAM?

    Rambus = 16 data lines
    DDR = 64 data lines

    Then there are voltage lines, control lines, and address lines. DDR has a lot more of these lines too. DDR is very complex, which is why the DDR-compatible motherboards were delayed.
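
    A small Python sketch of the per-pin arithmetic behind that data-line comparison, assuming PC800 RDRAM (16 data lines at 800 MT/s) against PC2100 DDR (64 data lines at 266 MT/s):

        def bus_mb_s(data_pins, mt_per_s):
            # Peak bandwidth across the whole data bus, in MB/s.
            return data_pins / 8 * mt_per_s

        for name, pins, rate in [("PC800 RDRAM", 16, 800), ("PC2100 DDR SDRAM", 64, 266)]:
            total = bus_mb_s(pins, rate)
            print(f"{name}: {total:.0f} MB/s total, {total / pins:.0f} MB/s per data pin")

    Similar total bandwidth, but RDRAM gets there with far fewer, much faster pins - which is also why its routing and termination rules are so strict.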

    Originally posted by Idris5:

    The second is a direct function of the type of signalling system that Rambus uses and is therefore not really comparable with DDR.

    How are the laws of physics different for DDR than for Rambus?

    Originally posted by Idris5:

    The third is plain wrong - if Rambus is more scalable then why have we seen it stay at a maximum of 800MHz in all computing areas whereas DDR has gone from around 333MHz to over 500MHz in the same time scale?

    Actually Rambus is moving to PC1066 and PC 1200 next year. Also they have demonstrated quad channel RDRAM which is very fast!

    BTW, maybe you can post a link to QDR DDR SDRAM or 500MHz DDR modules. I am not aware of this technology being shown working.

    Originally posted by Idris5:

    The fourth is partly valid, but in a similar way to the first comment, Rambus has to use this technique as it is more susceptible to noise if you don't use it.

    Yep. And so RDRAM has superior channel to channel isolation.

    What I've gotten out of this conversation is that moving to RDRAM in graphics cards is more of a management issue than a technical one. Anything I've missed?

    PS:

    Here is a short document describing next generation Rambus technology which has already been demonstrated. It includes PC1200, PC1066, and QRSL.
    http://www.rambus.com/developer/downloads/Rambus_Signaling_Technology_V7.

    By elimc July 29, 2001, 01:17 PM

    Here's the real address:
    http://www.rambus.com/developer/downloads/Rambus_Signaling_Technology_V7.pdf


    The .pdf part was cut off before for some reason.

    By idris5 July 30, 2001, 04:10 AM

    At work, so need to be quick:

    quote:I assume you mean, "more complicated does not equal better."

    Thanks - typing faster than thinking (which doesn't bode well).

    quote:How is DDR less complex than RDRAM?

    1) Rambus needs a more complicated signalling system.

    2) Rambus dies are more complicated (hence their yields are lower, which is one reason the price is higher).

    quote:Then there are voltage lines, control lines, and address lines. DDR has a lot more of these lines too.

    I don't believe DDR has more address lines, but you are correct on power and data.

    quote:How are the laws of physics different for DDR than for Rambus?

    Sarcasm will get you nowhere

    My point was merely that the way Rambus is designed it can use this kind of signalling system. As an analogy, you could buy a Corvette with an eight litre engine that will do 150mph or you could buy a Mitsubishi Evo 7 with a 2 litre engine which will also do 150mph (both ways work, but they get there differently).

    quote:Actually Rambus is moving to PC1066 and PC 1200 next year. Also they have demonstrated quad channel RDRAM which is very fast!

    Rambus is going to - they haven't yet. Whereas DDR has already shown very good scaling potential. Also, quad channel is a function of the architecture, not the signalling system (but nonetheless, it does show an area of superiority for Rambus).

    quote:Yep. And so RDRAM has superior channel to channel isolation.

    Yes, because it has to. Let's go back to my car analogy (which you may not like, I don't know!?). If I design a 200mph car I'm going to have to put tyres on it that are rated for that speed, whereas if I design a 120mph car I can put lower-rated tyres on it.

    Why design specifications into DDR that you don't need (all that does is boost cost)?

    Yes, from a literal point of view Rambus has better channel isolation, but DDR doesn't need it, so from that point of view DDR is the better architecture.


    Don't get me wrong, I like Rambus and in many ways it is better than DDR, but I'm not blind to its problem areas.

    By Satyr July 31, 2001, 10:42 AM

    I like your Peter Greenhalge quote.

    Not knowing much about RAM (networking guy here), I would have to bring up a very important point about design implementations, and that is market share. If a company has the market share, it tends to stay that way unless they make a big mistake. The general consumer has adopted DDR RAM because of its increased speed over SDRAM and its almost equal price. The tech companies will give consumers what they want. Right now your average consumer thinks, "RDRAM is expensive, which means bad. DDR RAM is cheap, which means good." Sure, there is the enthusiast market and the workstation market, but neither of those markets is big enough to carry a company.

    It all comes down to what the market will bear. VHS and BETA....

    By elimc July 31, 2001, 08:40 PM

    Originally posted by Idris5:

    Rambus is going to - they haven't yet. Whereas DDR has already shown very good scaling potential. Also, quad channel is a function of the architecture, not the signalling system (but nonetheless, it does show an area of superiority for Rambus).


    In graphics cards it has scaled well, but I wonder how far they can push DDR. I guess we will see in the future, won't we?

    As for car analogies, I see it this way: Rambus is in fifth gear while DDR SDRAM is in third gear. DDR is good for stop and go but can't go as fast as RDRAM. RDRAM is slow at lower speeds, but once it picks up it can maintain higher speeds well.

    As for Satyr's comments, I agree with most of them. It is starting to seem to me that moving to Rambus is a management and economic decision, not a technical one. Also, market share is important too. Right now ATI and nVidia are neck and neck graphics-wise. Maybe they are afraid to use resources to move to Rambus. If one company makes a mistake and Rambus is not accepted because of cost or something, then the other company will move forward.

    The only argument I don't agree with is the analogy between videotapes and RAM. This is not the same situation as VHS and Beta. Also, the niche markets actually are large enough to support companies, albeit small ones. ATI recently bought one: FireGL.

    Some more crazy thoughts:
    I'm really going out on a limb here. Perhaps the nForce architecture from nVidia is not just a company diversifying, but seeing where the market is going and moving to that. Maybe they see stand-alone graphics cards being eliminated in the next few years. Maybe ATI sees this as well. I've read recently that ATI has licensed AMD's HyperTransport technology. Many people have assumed this would be used in case ATI went to a Radeon2 MAXX. If the northbridge is eliminated in the future, HyperTransport would make sense in an integrated chipset like the nForce to connect the soldered-on GPU to the CPU. And it gets more interesting. A few months ago ATI signed a cross-licensing deal with Intel. This gave them some technologies and also the sole right to manufacture integrated chipsets for the P4. I assume many people believe, as I have, that this is just creating a chipset with a small graphics core and not taking out the northbridge. The details of the cross-licensing deal haven't really been discussed and nobody has really cared. Hmmmm . . . could ATI be making their own chipset without a northbridge?

    Could nVidia be moving towards eliminating stand-alone graphics cards? Well, the advantages would be great and I don't see a reason why they wouldn't. Could ATI be making their own chipset with soldered-on graphics starting with the P4? If they are, the product wouldn't be released commercially until late next year, or even later. The above is pure speculation.

    I wish I had a little bird that could give me this info.

    By elimc July 31, 2001, 08:44 PM

    Should I be posting my crazy thoughts in a new thread, or does it matter?

    BTW, thanks for the information from all of you guys. I've really learned a lot.

    By 3dcgi July 31, 2001, 09:49 PM

    quote:Originally posted by elimc:
    Could nVidia be moving towards eliminating stand-alone graphics cards? Well, the advantages would be great and I don't see a reason why they wouldn't.

    I don't see any advantage here for nVidia. Graphics companies would be happy to keep selling discrete graphics chips, but market conditions force them to consider integrated parts. The margins are higher on discrete chips.

    Also, integrated parts do not give more performance. Discrete chips can be bigger and faster because they do only graphics. The only performance advantage (that I can think of) that an integrated part can provide is more bandwidth to system memory. This is probably why SGI uses a unified memory architecture. AGP 8x will help alleviate this advantage, if it even is an advantage currently. Most people claim they can't tell a performance difference between AGP 2x and 4x.
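
    Some context for that AGP point, as a minimal Python sketch; the AGP rates are the standard spec figures, and the local-memory figure assumes a GeForce3-class 128-bit DDR interface at around 230 MHz:

        AGP_BASE_MHZ = 66.6   # AGP base clock, 32-bit wide bus

        for mult in (2, 4, 8):
            print(f"AGP {mult}x: ~{32 / 8 * AGP_BASE_MHZ * mult / 1000:.2f} GB/s")
        print(f"Local 128-bit DDR @ ~230 MHz: ~{128 / 8 * 230 * 2 / 1000:.2f} GB/s")

    Even AGP 4x (~1.1 GB/s) is a small fraction of the card's local memory bandwidth (~7.4 GB/s), which goes some way to explaining why the jump from 2x to 4x is hard to notice, and why the extra system-memory bandwidth matters mainly to integrated parts.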

    By elimc August 01, 2001, 08:05 AM

    I'm not talking about the integrated graphics cards that we see today. I'm talking about soldering discrete cards like the GF3 directly onto the motherboard and eliminating the northbridge. In the future stand-alone graphics cards could be eliminated. The main advantage would be removing a bottleneck in the system.

    Now I'm not saying that this will happen overnight, but perhaps in the next 2-5 years, after AGP 8x and serial AGP, we will see this happen. nForce could just be the start of a trend. For instance, we are starting to get 64-bit CPUs, although 32-bit CPUs will still be around for a while.

    This idea makes sense to me. What do you guys think?

    By idris5 August 01, 2001, 03:11 PM

    quote:Originally posted by Satyr:
    I like your Peter Greenhalge quote.

    ....heh, that's me, but I didn't come up with the quote

    [edit]
    oh, and elimc, you might just as well post 'crazy' thoughts in this thread as we have a fairly interesting one going.

    By elimc August 02, 2001, 12:18 AM

    Well, what do you guys think? The idea is interesting.

    By idris5 August 02, 2001, 03:49 AM

    You still need a northbridge to arbitrate between the memory bus, PCI slots, AGP slot and hard discs. You won't want lots of buses coming out of the MPU - just one - so you have to have a chip that splits data off this bus to the relevant destination.

    By elimc August 02, 2001, 05:24 PM

    WOOT WOOT!

    Check it out here guys:
    http://www.xbitlabs.com/news/story.html?id=996698004

    Here are the main points of the article:

    ATI's chipset will be released in Q4. It is code-named the A3 and is speculated to be designed for the P3, including the newer Tualatin core. It will be cheaper than the Crush chipset because there are no tricks to boost the memory subsystem, and the chipset will feature an integrated clock generator. The memory will be DDR. Instead of building a new southbridge, ATI will use someone else's southbridge via the PCI bus. It will have a UMA graphics core based on the RV200 core.

    Wow! I seem to recall an agreement between Intel and ATI for a P4 chipset as well. Maybe there will be an RDRAM chipset from ATI next year. This could all be the start of eliminating stand-alone graphics cards. Right now the integrated graphics cores are small and are not really a better solution than a discrete graphics card. I wonder what the second-generation chipsets will look like.

    One of the things that I find interesting is a mobile version of the chipset. ATI already owns a significant share of the mobile graphics segment; now they will have a mobile chipset along with it. There would be some advantages to this board: lower power consumption and integrated graphics would let it be cheaper. ATI could really take over the mobile market with this chipset. I wonder if there will be a mobile P4 with this chipset. Adding RDRAM to it would also allow even more power savings because of low voltage needs and the ability to shut down when not in use, not to mention some very fast notebooks.


    Originally posted by Idris5:
    You still need a northbridge to arbitrate between the memory bus, PCI slots, AGP slot and hard discs. You won't want lots of buses coming out of the MPU - just one - so you have to have a chip that splits data off this bus to the relevant destination.


    True, perhaps there will still be a northbridge, but the graphics card wouldn't have to be connected to it. The graphics card could have its own bus directly from the CPU, allowing it to work as a parallel processor? Or maybe the graphics card will still be attached to a northbridge, but the northbridge will be connected to the CPU by something like RDRAM? The first solution seems like it would be faster. Is it more practical than increasing the speed of the northbridge?

    This isn't my area of expertise. Anyone have any ideas?

