There has been some activity in the FPGA realm lately. First, Microsoft published a paper at ISCA (a very well-known peer-reviewed computer architecture conference) about using FPGAs in its datacenters to accelerate page-ranking processing for Bing. In a test deployment, Microsoft reported up to 95% more throughput for only 10% more power, with an added total cost of ownership (TCO) of less than 30%. Microsoft used Altera Stratix V FPGAs on PCIe boards, each with 8GB of DDR3 RAM. The FPGAs were connected to one another in a 6x8 torus configuration over a 10Gb SAS network. Microsoft mentioned that it programmed the FPGAs in Verilog and that this hand-coding was one of the more challenging aspects of the work. However, Microsoft believes high-level tools such as Scala (presumably a domain-specific subset), OpenCL, or "C-to-gates" tools such as AutoESL or ImpulseC might also be suitable for such jobs in the future. Microsoft appears to be quite happy with the results overall and hopes to deploy the system in production in 2015.

Intel revealed plans to manufacture a CPU-FPGA hybrid chip that combines Intel's traditional Xeon-line CPU cores with an FPGA on a single chip, with the FPGA and CPU having coherent access to memory. The exact details of the chip, such as the number of CPU cores, the amount of FPGA logic and other resources, or even who supplies the FPGA (likely Altera), have not been revealed. However, we do know that the chip will be package-compatible with the existing Xeon E5 line. Intel claims that FPGAs can deliver "up to 10x" the performance on unspecified industry benchmarks, and that its implementation will deliver another 2x improvement (so 20x total) thanks to coherency and lower CPU-FPGA latency. We will have to wait for more information about this product to validate any of Intel's claims. It will also be interesting to see what software and development tools Intel provides for this chip.

Finally, you may remember our previous coverage of OpenCL on Altera's FPGAs, where we mentioned that Xilinx had plans for OpenCL as well. Recently (about two months ago) Xilinx updated its Vivado design suite, which now includes "early access" support for OpenCL.

Overall, these announcements point to increased adoption of FPGAs in mainstream applications. Datacenters are very focused on performance per watt, as they tend to be power-limited while facing ever-increasing performance needs. Progress on scaling performance through multicore CPUs has slowed, and relying on GPUs to increase overall performance per watt has an upper bound as well. In a power-constrained environment where two different types of general-purpose processors are limited by progress on the process node side, we need another option to continue to scale performance. In an ideal world, one would design application-specific integrated circuits (ASICs) to get the highest performance/watt, but ASICs are hard to design and, once deployed, cannot be changed. That makes them a poor fit for datacenter applications, where the workload (such as the algorithms behind search) is tweaked and changed over time. FPGAs offer a happy medium between CPUs and ASICs: they provide programmable, reconfigurable hardware and can still offer a performance/watt advantage over CPUs because they are effectively customized to the algorithm. Making FPGAs accessible to more mainstream application programmers (i.e. those used to writing C, C++, Java, etc. rather than Verilog) remains one of the challenges, and tools such as OpenCL (and others) are gaining steam in this space.



View All Comments

  • nevertell - Saturday, June 21, 2014 - link

    The day anandtech has mouse-over popups in their articles is the day tech/computer journalism will be dead.
  • tuxRoller - Saturday, June 21, 2014 - link

    These aren't ads, and properly created popovers will disappear when the mouse leaves the area. Or you could use js to force clicks to get the same effect (and it'll be mobile friendly).
    These kinds of things can be done pretty easily with a good cms.
  • Senti - Sunday, June 22, 2014 - link

    People are really lazy nowadays. It's not like you have to go to a library and spend several hours looking for such answers...
  • nevertell - Sunday, June 22, 2014 - link

    The best they could do is just add a dictionary of the abbreviations and terms after every article that'd be linked to wikipedia or some other online resource. Popups are too disruptive. And in this day and age, if you can't be bothered to select a bit of text and use it to search for the selected text in a search engine of your choice to access the biggest vault of knowledge known to man, maybe you're not worthy of the knowledge you claim to be seeking.
  • MrSpadge - Monday, June 23, 2014 - link

    Why force every reader into researching these things themselves when the author is an expert who could provide a short explanation right to the point that matters for the current article? Sure, it takes more time to write such additional information down, but it saves many others lots of time and can probably often be reused. If this brief information is not enough, everyone is obviously still encouraged to do their own search, adapted to their own interests.

    A mouse-over which disappears properly provides the information right where you need it - in contrast to a glossary, where you have to navigate to some other place to read, and then find again the place you just left.
  • Flunk - Sunday, June 22, 2014 - link

    Adding an automatic clickable explanation system for acronyms would be a good idea. It would save authors' time and help out less knowledgeable readers. It wouldn't be hard to implement either.
  • p1esk - Sunday, June 22, 2014 - link

    What is the point of expanding FPGA? If you don't know what it stands for, the expansion won't help you understand what it is. Even a short explanation won't do much good if you're not familiar with basic concepts of digital design.
    I think the best way to handle such abbreviations is to link to the appropriate articles on Wikipedia.
  • potato32 - Saturday, June 21, 2014 - link

    Anyone want to hire an FPGA design engineer? Of course, I think that higher level code to gates translation would yield sub-optimal performance :)
  • wintermute000 - Sunday, June 22, 2014 - link

    I'm no CS major nor do I design hardware but intuitively it seems like a properly implemented FPGA setup would of course beat out generic x86 for parallel tasks, the only question is the efficiency and logistics of porting/implementing such code (in a holistic sense e.g. include say cost and ease of hiring a bunch of verilog/OpenCL gurus vs generic programmers).
  • ZeDestructor - Sunday, June 22, 2014 - link

    Not parallel, just endlessly repeatable. The most efficient core at times is a straight purely combinational single-op core. In practice though, people tend to go for a somewhat more modular design with a small number of ops available, simply to significantly simplify testing.
