So it seems the recent fashion for the CPU-knows-best approach is being re-thought; it's the return of the co-processor. While this makes sense, and at ngGrid we find it compelling when FPGAs are used in their sweet spot, much of the coverage is lacking in depth.
In general the co-processors they speak of are GPUs (or possibly physics engines for games), but very little is mentioned of FPGAs, DSPs and the myriad of non-AMD/Intel/x86 hardware. In addition, while dropping another type of processor into a spare socket can give you a serious performance boost, I think this design shift could lead to a whole new architecture style. I don't think things are going to go all the way back to the Transputer's sea of processing elements (the granularity will still be quite chunky), but it may finally bring some competition to the PC architecture. The PC standard is a huge hindrance to high-performance computing in terms of hardware technology -- a cost which, however, delivers amazingly cheap machines.
Delivering the ability to innovate in architecture while remaining compatible with standard tools/binaries will be the key challenge going forward, in my opinion. Just dropping a single FPGA/GPU into a socket is not enough: you'll still want the local RAM (and lots of it) and associated IO in order to really jump ahead of homogeneous CPU architectures.
Another missing part of the equation is real-world experience and examples. While DRC and XtremeData have socket-based FPGAs available today, I haven't seen much coverage of the benefits versus sitting on a PCI-foo bus. Of course, the lack of public FPGA successes in general is another matter altogether...
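To make the socket-versus-bus point concrete, here is a rough back-of-envelope sketch. The kernel times, data size and link bandwidths below are purely illustrative assumptions, not measurements from any real part; the point is just how quickly transfer cost over a PCI-class link can swallow an accelerator's raw speedup, and why local RAM and fast IO matter:

    # Back-of-envelope: effective speedup of an offloaded kernel when the
    # input/output data must cross a host-to-accelerator link.
    # All numbers are illustrative assumptions, not measurements.

    def effective_speedup(cpu_time_s, accel_time_s, bytes_moved, link_gb_per_s):
        """Total offload time = accelerator compute time + data transfer time."""
        transfer_s = bytes_moved / (link_gb_per_s * 1e9)
        return cpu_time_s / (accel_time_s + transfer_s)

    # Hypothetical kernel: 1 s on the host CPU, 50 ms on the accelerator
    # (20x raw speedup), needing 2 GB shipped across the link.
    print(effective_speedup(1.0, 0.05, 2e9, 1.0))   # ~0.49x over a ~1 GB/s PCI-class link
    print(effective_speedup(1.0, 0.05, 2e9, 10.0))  # ~4x with a much faster path to local memory

With the slow link the "accelerated" version is actually slower than the CPU; the raw 20x only starts to show once the data can live next to the co-processor rather than being dragged across the bus.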
5 Mar 2007