News
Synopsys + NVIDIA = The New Moore’s Law
NVIDIA and Synopsys have announced a landmark multi-year strategic partnership, anchored by a $2 billion investment from NVIDIA and aimed at transforming chip design. The collaboration integrates NVIDIA’s GPU-accelerated computing platform with Synopsys’ leading electronic design automation (EDA) and IP portfolio, with the goal of dramatically shortening chip design cycles, cutting power consumption, and enabling the next generation of AI, automotive, and high-performance computing silicon. Key initiatives include a unified, cloud-native design environment, a new AI-driven EDA suite dubbed “Synopsys.ai Copilot,” and an open “NVIDIA-Synopsys Foundry Design Kit” for the 2nm and Intel 18A process nodes. The alliance is poised to significantly reduce tapeout times, potentially from months to weeks, marking a pivotal shift in EDA.
- NVIDIA’s $2 billion investment fuses GPU acceleration with Synopsys’ EDA tools.
- Aims for 10-50x faster design speeds and reduced power consumption for AI and HPC.
- Introduces Synopsys.ai Copilot and an open NVIDIA-Synopsys Foundry Design Kit.

Launches
An Assistant to Ease Your Transition to PSS
Siemens EDA has launched Questa One’s Portable Stimulus Assist, a new GenAI-powered tool designed to streamline adoption of the Portable Stimulus Specification (PSS) by verification engineers. PSS offers a more abstract and productive way to capture verification intent than traditional UVM, and the assistant uses natural language prompts to guide users in building their own PSS models. The aim is to flatten the learning curve for engineers already familiar with UVM and ease their transition to PSS. The article emphasizes that GenAI’s strength lies in boosting engineering productivity by making complex flows more accessible, rather than overhauling them entirely.
- Questa One’s Portable Stimulus Assist uses GenAI to guide engineers in PSS model creation.
- Aims to enhance verification productivity by simplifying the transition from UVM to PSS.
- Leverages GenAI for practical, incremental advances in complex EDA flows.

Charts
Global Semiconductor Market to Grow by 16.0% in 2024 to Reach $611 Billion, Driven by AI and Memory, According to IDC
Recent analysis by IDC forecasts a significant rebound in the global semiconductor market, anticipating robust 16.0% growth in 2024 and pushing the market size to an estimated $611 billion. The resurgence is primarily attributed to surging demand from the Artificial Intelligence (AI) sector and a concurrent recovery in memory markets. The report expects this momentum to continue into 2025, fueled by sustained investment in advanced computing infrastructure and the widespread deployment of AI across industries. The forecast underscores AI’s pivotal role in shaping future semiconductor demand and driving innovation within the industry.
- Global semiconductor market expected to grow 16.0% in 2024, reaching $611 billion.
- Growth primarily driven by robust demand from AI and a recovering memory market.
- Indicates AI’s critical role in future semiconductor market expansion and innovation.
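As a quick sanity check on IDC’s figures, simple year-over-year arithmetic recovers the implied 2023 base from the 2024 forecast and growth rate (both taken from the summary above):

```python
# IDC figures from the summary: 16.0% growth to $611B in 2024.
# The implied 2023 base is simply forecast / (1 + growth).
forecast_2024_b = 611.0   # 2024 forecast, in $ billions
growth_rate = 0.16        # 16.0% year-over-year growth

implied_2023_b = forecast_2024_b / (1 + growth_rate)
print(f"Implied 2023 market size: ${implied_2023_b:.0f}B")  # → roughly $527B
```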
Research
LLMs on Analog In-Memory Computing Based Hardware (IBM Research, ETH Zurich)
Researchers from IBM Research and ETH Zurich have unveiled a groundbreaking technical paper introducing “Analog Foundation Models,” a novel approach to enable efficient execution of Large Language Models (LLMs) on Analog In-Memory Computing (AIMC) hardware. AIMC holds immense promise for improving speed and power efficiency in neural network inference but has traditionally faced hurdles with noisy computations and stringent quantization constraints. This work presents a general and scalable methodology that successfully adapts state-of-the-art LLMs, such as Phi-3-mini-4k-instruct and Llama-3.2-1B-Instruct, to maintain performance comparable to 4-bit weight, 8-bit activation baselines, even amidst analog noise. This breakthrough bridges the critical gap between high-capacity LLMs and energy-efficient analog hardware, paving the way for more sustainable AI applications.
- Introduces “Analog Foundation Models” for efficient LLM execution on AIMC hardware.
- Addresses challenges of noise and quantization in AIMC, retaining high LLM performance.
- Paves the way for energy-efficient foundation models and sustainable AI.
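To illustrate what AIMC-style inference has to contend with, the toy sketch below applies 4-bit symmetric weight quantization and additive Gaussian output noise to a matrix-vector product. The noise model and quantization scheme here are simplifying assumptions for illustration, not the training methodology of the IBM/ETH paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_sym(w, bits=4):
    """Symmetric per-tensor quantization to a signed `bits`-bit grid."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 levels each side for 4-bit
    scale = np.abs(w).max() / qmax
    return np.round(w / scale).clip(-qmax, qmax) * scale

def aimc_matvec(w, x, w_bits=4, noise_std=0.02):
    """Matrix-vector product under AIMC-like constraints:
    low-bit weights plus additive Gaussian output noise (toy model)."""
    wq = quantize_sym(w, w_bits)
    y = wq @ x
    return y + noise_std * np.abs(y).max() * rng.standard_normal(y.shape)

w = rng.standard_normal((8, 16))
x = rng.standard_normal(16)
exact = w @ x
noisy = aimc_matvec(w, x)
rel_err = np.linalg.norm(noisy - exact) / np.linalg.norm(exact)
print(f"relative error vs. FP baseline: {rel_err:.3f}")
```

The point of the paper is precisely that naive deployment incurs this kind of error, and that models can be adapted so accuracy survives it.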

Insight
We Need to Turn Specs into Oracles for Agentic Verification
This insightful piece advocates for leveraging the advanced natural language understanding capabilities of Large Language Models (LLMs) to transform design specifications into “oracles” for agentic verification. Drawing on expertise from Shelly Henry (CEO of MooresLab), the article highlights the inherent limitations of current specifications, which often suffer from incompleteness, inconsistencies, and ambiguities, leading to verification errors. The proposed methodology involves an LLM-based agent that can digest evolving specifications, generate clarifying questions, and systematically refine verification requirements—potentially incorporating complex diagrams like timing or FSM charts. This shift aims to establish a central, robust source of truth for design intent, significantly reducing verification errors and enhancing the overall efficiency of agentic verification flows.
- Advocates using LLMs to transform design specifications into “oracles” for verification.
- Addresses spec incompleteness, inconsistencies, and ambiguities in current design flows.
- LLM agents can refine specs, generate verification requirements, and enhance agentic verification efficiency.
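A minimal sketch of the "spec as oracle" idea: once an agent has distilled a spec's FSM description into a transition table, that table becomes an executable check over observed behavior. The FSM and trace format below are entirely hypothetical, chosen only to show the shape of such an oracle:

```python
# Hypothetical spec-derived FSM: state -> {input: next_state}.
SPEC_FSM = {
    "IDLE":  {"req": "BUSY"},
    "BUSY":  {"done": "IDLE", "err": "ERROR"},
    "ERROR": {"reset": "IDLE"},
}

def oracle_check(trace):
    """Return the first (state, input, next_state) triple that violates
    the spec-derived FSM, or None if the trace conforms."""
    for state, stim, nxt in trace:
        expected = SPEC_FSM.get(state, {}).get(stim)
        if expected != nxt:
            return (state, stim, nxt)
    return None

good = [("IDLE", "req", "BUSY"), ("BUSY", "done", "IDLE")]
bad  = [("IDLE", "req", "BUSY"), ("BUSY", "err", "IDLE")]  # spec says err -> ERROR
print(oracle_check(good))  # None
print(oracle_check(bad))   # ('BUSY', 'err', 'IDLE')
```

The article's proposal goes further, with the LLM agent also surfacing the spec's gaps and ambiguities before they harden into an oracle like this.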

Stay tuned for more essential updates and analysis shaping the future of the semiconductor industry.