
  • VaniliásRönk

    grand lord

    in reply to #95904256's post #15

    As far as I know, they've rather neglected their non-x86 development. (edit: I'm thinking of Itanium here; I don't think anything else of theirs could be used for such a purpose)
    I think it's because of Fusion that they're trying to tie it to x86; that one will be fully x86 and DX compatible.

    [ Edited ]

    "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." (Albert Einstein)

  • P.H.

    senior member

    in reply to #95904256's post #15

    They intend it expressly for general-purpose use, expressly built on the x86 instruction set. ("It will be easily programmable using many existing software tools, and designed to scale to trillions of floating point operations per second (Teraflops) of performance. The Larrabee architecture will include enhancements to accelerate applications such as scientific computing, recognition, mining, synthesis, visualization, financial analytics and health applications." [link], though that one is fairly old; this one is newer.) Current x86 compilers can be made compatible with minimal modification; in 2009 you may, for instance, be able to get a Visual C compiler that can target it as well.
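
    Just to illustrate what "existing software tools" means in practice (a minimal sketch, nothing Larrabee-specific in it): a plain data-parallel C loop like the one below is exactly what an auto-vectorizing x86 compiler could spread across many cores and wide vector units without touching the source; only the compilation target would change.

        #include <stddef.h>

        /* Plain scalar C: y = a*x + y over n elements. An auto-vectorizing,
           auto-parallelizing x86 compiler can map this onto wide SIMD units
           and many cores with no change to the source at all. */
        void saxpy(float a, const float *x, float *y, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }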

    This line of reasoning, put this concisely, strikes me as downright appealing (two quick sketches follow after the quote):

    The Larrabee architecture could be characterized as the anti-GPU entry. The overall approach is an attempt to evolve the CPU into a terascale data parallel engine. According to Intel, Larrabee will be a manycore (i.e., more than 8 cores) device and will be based on a subset of the IA instruction set with some extra GPU-like instructions thrown in. Intel has not elaborated on how it intends to do this, but one could imagine super-sized SSE units with just enough x86 CPU silicon to enable general-purpose flow control and data access. The first product release will probably come in 2009, but Intel says it may have something to demo as early as next year.

    The idea behind Larrabee is to bring both traditional graphics processing and data parallel computing under the IA umbrella. I'm not going to talk about the traditional graphics side of the story here (I'll let the game weenies argue about the advantages of ray-tracing over rasterization.) What's interesting about Larrabee and its GPU brethren is the extent to which a graphics engine can become a general-purpose computing engine without compromising its performance.

    The combination of a data parallel engine with more of the general-purpose flexibility of a traditional CPU could offer a powerful model for scientific computing applications, which usually consist of an irregular mix of matrix math and other logic. One of the drawbacks of traditional GPUs is that they depend upon an accompanying CPU for virtually all of the non-vector logic. That's fine if the application divides neatly between a vector computing kernel and the rest of the application logic in such a way as to keep both types of processing engines busy. But if it doesn't, the software developer has to find a way to tease out enough parallelism for the GPU to make sending the vector data on a round trip from the CPU worthwhile. This will only get worse in the future, since chip-to-chip bus performance is not expected to keep pace with either CPU or GPU performance.
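
    The "super-sized SSE units with just enough x86 silicon for flow control" idea is easy to picture with today's 4-wide SSE (a sketch only; Intel hasn't disclosed Larrabee's actual vector width or instructions):

        #include <stddef.h>
        #include <xmmintrin.h>  /* SSE intrinsics, available in current x86 compilers */

        /* Vector math and general-purpose flow control in one instruction set:
           scale a buffer 4 floats at a time, but bail out on a scalar condition.
           On Larrabee the same pattern would presumably just use (much) wider
           vectors. Tail elements are omitted for brevity. */
        void scale_until(float *buf, size_t n, float factor, const int *stop_flag)
        {
            __m128 f = _mm_set1_ps(factor);
            for (size_t i = 0; i + 4 <= n; i += 4) {
                if (*stop_flag)          /* ordinary x86 branch, no CPU round trip */
                    return;
                __m128 v = _mm_loadu_ps(buf + i);
                _mm_storeu_ps(buf + i, _mm_mul_ps(v, f));
            }
        }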
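
    And the round-trip argument in the last paragraph is easy to put into numbers with a back-of-the-envelope model (every figure below is a made-up placeholder, not vendor data):

        #include <stdio.h>

        /* Offloading n floats to a discrete GPU only pays off if the compute
           saving exceeds the chip-to-chip transfer cost. */
        int main(void)
        {
            double n          = 1e8;    /* elements, 4 bytes each */
            double flops_each = 10.0;   /* floating point ops per element */
            double cpu_gflops = 10.0;   /* sustained CPU rate */
            double gpu_gflops = 200.0;  /* sustained GPU rate */
            double bus_gbs    = 4.0;    /* PCIe-class bus bandwidth, GB/s */

            double t_cpu = n * flops_each / (cpu_gflops * 1e9);
            double t_gpu = n * flops_each / (gpu_gflops * 1e9)
                         + 2.0 * n * 4.0 / (bus_gbs * 1e9);  /* copy there and back */

            /* Prints 0.100 s for the CPU vs 0.205 s for the GPU: the 20x
               faster GPU loses once the bus round trip is counted. */
            printf("CPU: %.3f s, GPU incl. transfer: %.3f s\n", t_cpu, t_gpu);
            return 0;
        }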

    [ Edited ]

    Arguing on the Internet is like running in the Special Olympics. Even if you win, you are still ... ˙˙˙ Real Eyes Realize Real Lies ˙˙˙
