QCT Leverages NVIDIA AI Enterprise Software Platform to Enhance AI Powerhouses

When we last took a close look at QCT (Quanta Cloud Technology), the data center, hyperscale and cloud server maker based in Taiwan, we pointed out that the company is a bigger player in the server industry than ….

AWS Announces EC2 UltraCluster and GA of Trainium2 Instances

LAS VEGAS, Dec. 3, 2024 — At AWS re:Invent, Amazon Web Services today announced the general availability of AWS Trainium2-powered Amazon Elastic Compute Cloud (Amazon EC2) instances and introduced new Trn2 UltraServers, enabling customers to train ….

HPC News Bytes 20241202: Do LLMs Understand?, Agentic AI, Simulating the Universe, France Adding Reactors, TSMC 2nm Chips

A happy December start to you! From the world of HPC-AI, here’s a rapid (7:38) romp through recent news, including: LLMs and “emergent” understanding, collaborative Agentic AI, Frontier exascale ….

Oriole Networks Raises $22M for Photonics to Cut LLM Energy Use

London, 21st October: Oriole Networks – a company using light to train large language models with low energy consumption – has raised an additional $22 million from investors to scale its “super-brain” solution. The round was led by Plural, with all existing investors – UCL Technology Fund, XTX Ventures, Clean Growth Fund, and Dorilton Ventures – reinvesting. Oriole Networks addresses […]

HPC News Bytes 20241014: AMD Rollout, Foxconn’s Massive AI HPC, AI Drives Nobels, Are LLMs Intelligent?

A good mid-October morn to you! Here’s a brief (6:30) run-through of developments from the world of HPC-AI, including: AMD’s product rollout, Foxconn’s big Blackwell AI HPC in Taiwan, AI for science driving Nobel Prizes, and a Meta AI guru’s AGI skepticism.

Cerebras Claims Fastest AI Inference

AI compute company Cerebras Systems today announced what it said is the fastest AI inference solution. Cerebras Inference delivers 1,800 tokens per second for Llama3.1 8B and 450 tokens per second for Llama3.1 70B, according to the company, making it 20 times faster than GPU-based solutions in hyperscale clouds.

NVIDIA and Google DeepMind Collaborate on LLMs

NVIDIA and Google today announced three new collaborations at Google I/O ’24, intended to make it easier for developers to create AI-powered applications with world-class performance. Using TensorRT-LLM, NVIDIA is working with Google to optimize two new models introduced at the event: Gemma 2 and PaliGemma. These models are built from the same research and […]

Amazon Adds $2.75B to Stake in GenAI Startup Anthropic

Amazon announced it has made its biggest-ever investment, $2.75 billion, in OpenAI/ChatGPT competitor Anthropic, another indication that the generative AI phenomenon continues to heat up. Today’s news follows an earlier $1.25 billion investment in Anthropic announced last September, bringing Amazon’s total investment to $4 billion. “We have a notable history with […]

Oriole Networks Raises £10m for Faster LLM Training

London, 27 March 2024: Oriole Networks – a startup using light to train LLMs faster with less power – has raised £10 million in seed funding to improve AI performance and adoption and solve AI’s energy problem. The round, which the company said is one of the UK’s largest seed raises in recent years, was co-led […]

Accelerated HPC for Energy Efficiency with AWS and NVIDIA

Many industries are starting to run HPC in the cloud. Find out how GPU-accelerated compute, from AWS and NVIDIA, is helping organizations run HPC workloads and AI/ML jobs faster, in a more energy-efficient way.