Into the Omniverse: How Generative AI Fuels Personalized, Brand-Accurate Visuals With OpenUSD https://blogs.nvidia.com/blog/generative-ai-openusd-fuels-3d-product-configurators/ (Nov. 21, 2024)

Editor’s note: This post is part of Into the Omniverse, a blog series focused on how developers, 3D artists and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

3D product configurators are changing the way industries like retail and automotive engage with customers by offering interactive, customizable 3D visualizations of products.

Using physically accurate product digital twins, even non-3D artists can streamline content creation and generate stunning marketing visuals.

With the new NVIDIA Omniverse Blueprint for 3D conditioning for precise visual generative AI, developers can start using the NVIDIA Omniverse platform and Universal Scene Description (OpenUSD) to easily build personalized, on-brand and product-accurate marketing content at scale.

By integrating generative AI into product configurators, developers can optimize operations and reduce production costs. With repetitive tasks automated, teams can focus on the creative aspects of their jobs.

Developing Controllable Generative AI for Content Production

The new Omniverse Blueprint introduces a robust framework for integrating generative AI into 3D workflows to enable precise and controlled asset creation.

Example images created using the NVIDIA Omniverse Blueprint for 3D conditioning for precise visual generative AI.

Key highlights of the blueprint include:

  • Model conditioning to ensure that the AI-generated visuals adhere to specific brand requirements like colors and logos.
  • Multimodal approach that combines 3D and 2D techniques to offer developers complete control over final visual outputs while ensuring the product’s digital twin remains accurate.
  • Key components such as an on-brand hero asset, a simple and untextured 3D scene, and a customizable application built with the Omniverse Kit App Template.
  • OpenUSD integration to enhance development of 3D visuals with precise visual generative AI.
  • Integration of NVIDIA NIM microservices, such as the Edify 360 NIM, Edify 3D NIM, USD Code NIM and USD Search NIM microservices, allows the blueprint to be extensible and customizable. The microservices are available to preview on build.nvidia.com.

How Developers Are Building AI-Enabled Content Pipelines

Katana Studio developed a content creation tool with OpenUSD called COATcreate that empowers marketing teams to rapidly produce 3D content for automotive advertising. By using 3D data prepared by creative experts and vetted by product specialists in OpenUSD, even users with limited artistic experience can quickly create customized, high-fidelity, on-brand content for any region or use case without adding to production costs.

Global marketing leader WPP has built a generative AI content engine for brand advertising with OpenUSD. The Omniverse Blueprint for precise visual generative AI helped facilitate the integration of controllable generative AI in its content creation tools. Leading global brands like The Coca-Cola Company are already beginning to adopt tools from WPP to accelerate iteration on their creative campaigns at scale.

Watch the replay of a recent livestream with WPP for more on its generative AI- and OpenUSD-enabled workflow.

The NVIDIA creative team developed a reference workflow called CineBuilder on Omniverse that allows companies to use text prompts to generate ads personalized to consumers based on region, weather, time of day, lifestyle and aesthetic preferences.

Developers at independent software vendors and production services agencies are building content creation solutions infused with controllable generative AI and built on OpenUSD. Accenture Song, Collective World, Grip, Monks and WPP are among those adopting Omniverse Blueprints to accelerate development.

Read the tech blog on developing product configurators with OpenUSD and get started developing solutions using the DENZA N7 3D configurator and CineBuilder reference workflow.

Get Plugged Into the World of OpenUSD

Various resources are available to help developers get started building AI-enabled product configuration solutions:

For more on optimizing OpenUSD workflows, explore the new self-paced Learn OpenUSD training curriculum that includes free Deep Learning Institute courses for 3D practitioners and developers. For more resources on OpenUSD, attend our instructor-led Learn OpenUSD courses at SIGGRAPH Asia on December 3, explore the Alliance for OpenUSD forum and visit the AOUSD website.

Don’t miss the CES keynote delivered by NVIDIA founder and CEO Jensen Huang live in Las Vegas on Monday, Jan. 6, at 6:30 p.m. PT for more on the future of AI and graphics.

Stay up to date by subscribing to NVIDIA news, joining the community and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.

GPU’s Companion: NVIDIA App Supercharges RTX GPUs With AI-Powered Tools and Features https://blogs.nvidia.com/blog/ai-studio-app-geforce-rtx-remix/ (Nov. 12, 2024)

The NVIDIA app — officially releasing today — is a companion platform for content creators, GeForce gamers and AI enthusiasts using GeForce RTX GPUs.

Featuring a GPU control center, the NVIDIA app allows users to access all their GPU settings in one place. From the app, users can do everything from updating to the latest drivers and configuring NVIDIA G-SYNC monitor settings, to tapping AI video enhancements through RTX Video and discovering exclusive AI-powered NVIDIA apps.

In addition, NVIDIA RTX Remix has a new update that improves performance and streamlines workflows.

For a deeper dive on gaming-exclusive benefits, check out the GeForce article.

The GPU’s PC Companion

The NVIDIA app turbocharges GeForce RTX GPUs with a bevy of applications, features and tools.

Keep NVIDIA Studio Drivers up to date — The NVIDIA app automatically notifies users when the latest Studio Driver is available. These graphics drivers, fine-tuned in collaboration with developers, enhance performance in top creative applications and are tested extensively to deliver maximum stability. They’re released once a month.

Discover AI creator apps — Millions have used the NVIDIA Broadcast app to turn offices and dorm rooms into home studios using AI-powered features that improve audio and video quality — without the need for expensive, specialized equipment. It’s user-friendly, works in virtually any app and includes AI features like Noise and Acoustic Echo Removal, Virtual Backgrounds, Eye Contact, Auto Frame, Vignettes and Video Noise Removal.

NVIDIA RTX Remix is a modding platform built on NVIDIA Omniverse that allows users to capture game assets, automatically enhance materials with generative AI tools and create stunning RTX remasters with full ray tracing, including DLSS 3.5 support featuring Ray Reconstruction.

NVIDIA Canvas uses AI to turn simple brushstrokes into realistic landscape images. Artists can create backgrounds quickly or speed up concept exploration, enabling them to visualize more ideas.

Enhance video streams with AI — The NVIDIA app includes a System tab as a one-stop destination for display, video and GPU options. It also includes an AI feature called RTX Video that enhances all videos streamed on browsers.

RTX Video Super Resolution uses AI to enhance video streaming on GeForce RTX GPUs by removing compression artifacts and sharpening edges when upscaling.

RTX Video HDR converts any standard dynamic range video into vibrant high dynamic range (HDR) when played in Google Chrome, Microsoft Edge, Mozilla Firefox or the VLC media player. HDR enables more vivid, dynamic colors to enhance gaming and content creation. A compatible HDR10 monitor is required.

Give game streams or video on demand a unique look with AI filters — Content creators looking to elevate their streamed or recorded gaming sessions can access the NVIDIA app’s redesigned Overlay feature with AI-powered game filters.

Freestyle RTX filters allow livestreamers and content creators to apply fun post-processing filters, changing the look and mood of content with tweaks to color and saturation.

Joining these Freestyle RTX game filters is RTX Dynamic Vibrance, which enhances visual clarity on a per-app basis. Colors pop more on screen, and color crushing is minimized to preserve image quality and immersion. The filter is accelerated by Tensor Cores on GeForce RTX GPUs, making it easier for viewers to enjoy all the action.

Enhanced visual clarity with RTX Dynamic Vibrance.

Freestyle RTX filters empower gamers to personalize the visual aesthetics of their favorite games through real-time post-processing filters. This feature boasts compatibility with a vast library of more than 1,200 games.

Download the NVIDIA app today.

RTX Remix 0.6 Release

The new RTX Remix update offers modders significantly improved mod performance, as well as quality of life improvements that help streamline the mod-making process.

RTX Remix now supports the ability to test experimental features under active development. It includes a new Stage Manager that makes it easier to see and change every mesh, texture, light or element in scenes in real time.

To learn more about the RTX Remix 0.6 release, check out the release notes.

With RTX Remix in the NVIDIA app launcher, modders have direct access to Remix’s powerful features. Through the NVIDIA app, RTX Remix modders can benefit from faster start-up times, lower CPU usage and direct control over updates with an optimized user interface.

To the 3D Victor Go the Spoils

NVIDIA Studio in June kicked off a 3D character contest for artists in collaboration with Reallusion, a company that develops 2D and 3D character creation and animation software. Today, we’re celebrating the winners from that contest.

In the category of Best Realistic Character Animation, Robert Lundqvist won for the piece Lisa and Fia.

In the category of Best Stylized Character Animation, Loic Bramoulle won for the piece HellGal.

Both winners will receive an NVIDIA Studio-validated laptop to help further their creative efforts.

View over 250 imaginative and impressive entries here.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Spooks Await at the ‘Haunted Sanctuary,’ Built With RTX and AI https://blogs.nvidia.com/blog/ai-studio-comfyui-adobe-firefly-photoshop/ (Oct. 30, 2024)

Among the artists using AI to enhance and accelerate their creative endeavors is Sabour Amirazodi, a creator and tech marketing and workflow specialist at NVIDIA.

Using his over 20 years of multi-platform experience in location-based entertainment and media production, he decorates his home every year with an incredible Halloween installation — dubbed the Haunted Sanctuary.

The project is a massive undertaking requiring projection mapping, the creation and assembly of 3D scenes, compositing and editing in Adobe After Effects and Premiere Pro, and more. The creation process was accelerated using the NVIDIA Studio content creation platform and Amirazodi’s NVIDIA RTX 6000 GPU.

This year, Amirazodi deployed new AI workflows in ComfyUI, Adobe Firefly and Photoshop to create digital portraits — inspired by his family — as part of the installation.

Give ’em Pumpkin to Talk About

ComfyUI is a node-based interface that generates images and videos from text. It’s designed to be highly customizable, allowing users to design workflows, adjust settings and see results immediately. It can combine various AI models and third-party extensions to achieve a higher degree of control.

For example, this workflow below requires entering a prompt, the details and characteristics of the desired image, and a negative prompt to help omit any undesired visual effects.
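For readers who prefer to drive ComfyUI programmatically, the same idea can be expressed through its local HTTP API. The sketch below is illustrative only: it assumes a default ComfyUI install listening on port 8188 and uses ComfyUI's built-in node types, while the checkpoint file name and prompts are placeholders rather than part of Amirazodi's actual workflow.

```python
# Minimal sketch: queue a text-to-image job (positive + negative prompt) on a
# local ComfyUI server via its default /prompt endpoint. Node names are
# ComfyUI's built-in nodes; the checkpoint file name is hypothetical.
import requests

positive = "moody Victorian portrait, candlelight, ornate frame, hyper-detailed"
negative = "blurry, extra fingers, text, watermark"

workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "example_model.safetensors"}},
    "2": {"class_type": "CLIPTextEncode", "inputs": {"text": positive, "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode", "inputs": {"text": negative, "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode", "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage", "inputs": {"images": ["6", 0], "filename_prefix": "portrait"}},
}

# Queue the job; generated images land in ComfyUI's output folder.
requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow}, timeout=10)
```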

Since Amirazodi wanted his digital creations to closely resemble his family, he started by applying Run IP Adapters, which use reference images to inform generated content.

ComfyUI nodes and reference material in the viewer.

From there, he tinkered with the settings to achieve the desired look and feel of each character.

The Amirazodis digitized for the ‘Halloween Sanctuary’ installation.

ComfyUI has NVIDIA TensorRT acceleration, so RTX users can generate images from prompts up to 60% faster.

Get started with ComfyUI.

In Darkness, Let There Be Light

Adobe Firefly is a family of creative generative AI models that offer new ways to ideate and create while assisting creative workflows. They’re designed to be safe for commercial use and were trained, using NVIDIA GPUs, on licensed content like Adobe Stock Images and public domain content where copyright has expired.

To make the digital portraits fit as desired, Amirazodi needed to expand the background.

Adobe Photoshop features a Generative Fill tool called Generative Expand that allows artists to extend the border of their image with the Crop tool and automatically fill the space with content that matches the existing image.

Photoshop also features Neural Filters, which allow artists to explore creative ideas and make complex adjustments to images in just seconds, saving them hours of tedious, manual work.

With Smart Portrait Neural Filters, artists can easily experiment with facial characteristics such as gaze direction and lighting angles simply by dragging a slider. Amirazodi used the feature to apply the final touches to his portraits, adjusting colors, textures, depth blur and facial expressions.

NVIDIA RTX GPUs help power AI-based tasks, accelerating the Neural Filters in Photoshop.

Learn more about the latest Adobe features and tools in this blog.

AI is already helping accelerate and automate tasks across content creation, gaming and everyday life — and the speedups are only multiplied with an NVIDIA RTX- or GeForce RTX GPU-equipped system.

Check out and share Halloween- and fall-themed art as a part of the NVIDIA Studio #HarvestofCreativity challenge on Instagram, X, Facebook and Threads for a chance to be featured on the social media channels.

MAXimum AI: RTX-Accelerated Adobe AI-Powered Features Speed Up Content Creation https://blogs.nvidia.com/blog/studio-adobe-max-firefly-ai-substance-3d-viewer/ (Oct. 14, 2024)

At the Adobe MAX creativity conference this week, Adobe announced updates to its Adobe Creative Cloud products, including Premiere Pro and After Effects, as well as to Substance 3D products and the Adobe video ecosystem.

These apps are accelerated by NVIDIA RTX and GeForce RTX GPUs — in the cloud or running locally on RTX AI PCs and workstations.

One of the most highly anticipated features is Generative Extend in Premiere Pro (beta), which uses generative AI to seamlessly add frames to the beginning or end of a clip. Powered by the Firefly Video Model, it’s designed to be commercially safe and only trained on content Adobe has permission to use, so artists can create with confidence.

Adobe Substance 3D Collection apps offer numerous RTX-accelerated features for 3D content creation, including ray tracing, AI delighting and upscaling, and image-to-material workflows powered by Adobe Firefly.

Substance 3D Viewer, entering open beta at Adobe MAX, is designed to unlock 3D in 2D design workflows by allowing 3D files to be opened, viewed and used across design teams. This will improve interoperability with other RTX-accelerated Adobe apps like Photoshop.

Adobe Firefly integrations have also been added to Substance 3D Collection apps, including Text to Texture, Text to Pattern and Image to Texture tools in Substance 3D Sampler, as well as Generative Background in Substance 3D Stager, to further enhance 3D content creation with generative AI.

The October NVIDIA Studio Driver, designed to optimize creative apps, will be available for download tomorrow. For automatic Studio Driver notifications, as well as easy access to apps like NVIDIA Broadcast, download the NVIDIA app beta.

Video Editing Evolved

Adobe Premiere Pro has transformed video editing workflows over the last four years with features like Auto Reframe and Scene Edit Detection.

The recently launched GPU-accelerated Enhance Speech, AI Audio Category Tagging and Filler Word Detection features allow editors to use AI to intelligently cut and modify video scenes.

The Adobe Firefly Video Model — now available in limited beta at Firefly.Adobe.com — brings generative AI to video, marking the next advancement in video editing. It allows users to create and edit video clips using simple text prompts or images, helping fill in content gaps without having to reshoot, extend or reframe takes. It can also be used to create video clip prototypes as inspiration for future shots.

Topaz Labs has introduced a new plug-in for Adobe After Effects that brings its AI-powered video enhancement models into the compositing app. This gives users access to enhancement and motion deblur models for sharper, clearer video quality. Accelerated on GeForce RTX GPUs, these models run nearly 2.5x faster on the GeForce RTX 4090 Laptop GPU compared with the MacBook Pro M3 Max.

Stay tuned for NVIDIA TensorRT enhancements and more Topaz Video AI effects coming to the After Effects plug-in soon.

3D Super Powered

The Substance 3D Collection is revolutionizing the ideation stage of 3D creation with powerful generative AI features in Substance 3D Sampler and Stager.

Sampler’s Text to Texture, Text to Pattern and Image to Texture tools, powered by Adobe Firefly, allow artists to rapidly generate reference images from simple prompts that can be used to create parametric materials.

Stager’s Generative Background feature helps designers explore backgrounds for staging 3D models, using text descriptions to generate images. Stager can then match lighting and camera perspective, allowing designers to explore more variations faster when iterating and mocking up concepts.

Substance 3D Viewer also offers a connected workflow with Photoshop, where 3D models can be placed into Photoshop projects and edits made to the model in Viewer will be automatically sent back to the Photoshop project. GeForce RTX GPU hardware acceleration and ray tracing provide smooth movement in the viewport, producing up to 80% higher frames per second on the GeForce RTX 4060 Laptop GPU compared to the MacBook M3 Pro.

There are also new Firefly-powered features in Substance 3D Viewer, like Text to 3D and 3D Model to Image, that combine text prompts and 3D objects to give artists more control when generating new scenes and variations.

The latest After Effects release features an expanded range of 3D tools that enable creators to embed 3D animations, cast ultra-realistic shadows on 2D objects and isolate effects in 3D space.

After Effects now also has an RTX GPU-powered Advanced 3D Renderer that accelerates the processing-intensive and time-consuming task of applying HDRI lighting — lowering creative barriers to entry while improving content realism. Rendering can be done 30% faster on a GeForce RTX 4090 GPU over the previous generation.

Pairing Substance 3D with After Effects’ native, fast 3D integration allows artists to significantly boost the visual quality of 3D in After Effects with precision texturing and access to more than 20,000 parametric 3D materials, IBL environment lights and 3D models.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Upgrade Livestreams With Twitch Enhanced Broadcasting and NVIDIA Encoder https://blogs.nvidia.com/blog/studio-twitch-enhanced-broadcasting-hevc-driver/ (Sept. 17, 2024)

At TwitchCon — a global convention for the Twitch livestreaming platform — livestreamers and content creators this week can experience the latest technologies for accelerating creative workflows and improving video quality.

That includes the closed beta release of Twitch Enhanced Broadcasting support for HEVC when using the NVIDIA encoder.

Content creators can also use the NVIDIA Broadcast app, eighth-generation NVIDIA NVENC and RTX-powered optimizations in streaming and video editing apps to enhance their productions.

Plus, the September NVIDIA Studio Driver, designed to optimize creative apps, is now ready for download. Studio Drivers undergo extensive testing to ensure seamless compatibility while enhancing features, automating processes and accelerating workflows.

Twitch Enhanced Broadcasting With HEVC

The tradeoff between higher-resolution video quality and reliable streaming is a common issue livestreamers struggle with.

Higher-quality video provides more enjoyable viewing experiences but can cause streams to buffer for viewers with lower bandwidth or older devices. Streaming lower-bitrate video allows more people to watch content seamlessly but introduces artifacts that can interfere with viewing quality.

To address this issue, NVIDIA and Twitch collaborated to develop Twitch Enhanced Broadcasting. The feature adds the capability to send multiple streams — different versions of encoded video with different resolutions or bitrates — directly from NVIDIA GeForce RTX-equipped PCs or NVIDIA RTX workstations to deliver the highest-quality video a viewer’s internet connection can handle.

Twitch supports HEVC (H.265) in the Enhanced Broadcasting closed beta. With the NVIDIA encoder, Twitch streamers get 25% improved efficiency and quality over H.264.

This means that video will look as if it were being streamed with 25% more bitrate — in higher quality and with reduced artifacts or encoding errors. The feature is ideal for streaming fast-paced gameplay, enabling cleaner, sharper video with minimal lag.

Because all stream versions are generated with a dedicated hardware encoder on GeForce RTX GPUs, the rest of the system’s GPU and CPU are free to focus on running games more smoothly to maximize performance.
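Enhanced Broadcasting handles this rendition ladder automatically inside supported streaming software, but the underlying idea (one source encoded into several HEVC streams of different resolutions and bitrates on the GPU's NVENC hardware) can be sketched with a few encoder calls. The snippet below is a rough illustration, not the Enhanced Broadcasting client itself; it assumes an ffmpeg build with the hevc_nvenc encoder, and the file names and bitrates are placeholders.

```python
# Rough illustration of a multi-rendition HEVC encode on NVENC, not the actual
# Enhanced Broadcasting pipeline (that configuration lives inside OBS/Twitch).
# Assumes ffmpeg compiled with the hevc_nvenc encoder; paths are hypothetical.
import subprocess

renditions = [
    ("1080p", "1920:-2", "6M"),
    ("720p",  "1280:-2", "3M"),
    ("480p",  "854:-2",  "1500k"),
]

for name, scale, bitrate in renditions:
    subprocess.run(
        ["ffmpeg", "-y", "-i", "gameplay.mp4",
         "-vf", f"scale={scale}",          # downscale for the lower renditions
         "-c:v", "hevc_nvenc", "-b:v", bitrate,
         "-c:a", "copy",
         f"gameplay_{name}.mp4"],
        check=True,
    )
```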

Learn how to get started on twitch.com.

AI-Enhanced Microphones and Webcams

Streaming is easier than ever with NVIDIA technologies.

For starters, NVIDIA’s dedicated hardware encoder keeps PC performance high while delivering high-quality video. And NVIDIA GPUs include Tensor Cores that run AI efficiently.

Livestreamers can use AI to enhance their hardware peripherals and devices, which is especially helpful for those who haven’t had the time or resources to assemble extensive audio and video setups.

NVIDIA Broadcast transforms any home office or dorm room into a home studio — without the need to purchase specialized equipment. Its AI-powered features include Noise and Echo Removal for microphones, and Virtual Background, Auto Frame, Video Noise Removal and Eye Contact for cameras.

Livestreamers can download the Broadcast app or access its effects across popular creative apps, including Corsair iCUE, Elgato Camera Hub, OBS, Streamlabs, VTube Studio and Wave Link.

Spotlight the Highlights

GeForce RTX GPUs make it lightning-fast to edit and enhance video footage on the most popular video editing apps, from Adobe Premiere Pro to CapCut Pro.

Streamers can use AI-powered, RTX-accelerated features like Enhance Speech to remove noise and improve the quality of dialogue clips; Auto Reframe to automatically size social media videos; and Scene Edit Detection to break up long videos, like B-roll stringouts, into individual clips.

NVIDIA encoders help turbocharge the export process. For those looking for extreme performance, the GeForce RTX 4070 Ti GPU and up come equipped with dual encoders that can be used in parallel to halve export times on apps like CapCut, the most widely used video editing app on TikTok.

Clearer, Sharper Viewing Experiences With RTX Video

NVIDIA RTX Video — available exclusively to NVIDIA RTX and GeForce RTX GPU owners — can turn any online and native video into pristine 4K high dynamic range (HDR) content with two technologies: Video Super Resolution and Video HDR.

RTX Video Super Resolution de-artifacts and upscales streamed video to remove errors that occur during encoding or transport, then runs an AI super-resolution effect. The result is cleaner, sharper video that’s ideal for streaming on platforms like YouTube and Twitch. RTX Video is available in popular web browsers including Opera, Google Chrome, Mozilla Firefox and Microsoft Edge.

Many users have HDR displays, but there isn’t much HDR content online. RTX Video HDR addresses this by turning any standard dynamic range (SDR) video into HDR10 quality that delivers a wider range of brights and darks and makes visuals more vibrant and colorful. This feature is especially helpful when watching dark-lit scenes in video games.

RTX Video HDR requires an RTX GPU connected to an HDR10-compatible monitor or TV. For more information, see the RTX Video FAQ.

Check out TwitchCon — taking place in San Diego and online from Sept. 20-22 — for the latest streaming updates.

Problem Solved: STEM Studies Supercharged With RTX and AI Technologies https://blogs.nvidia.com/blog/ai-decoded-stem/ (Aug. 7, 2024)

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC users.

AI powered by NVIDIA GPUs is accelerating nearly every industry, creating high demand for graduates, especially from STEM fields, who are proficient in using the technology. Millions of students worldwide are participating in university STEM programs to learn skills that will set them up for career success.

To prepare students for the future job market, NVIDIA has worked with top universities to develop a GPU-accelerated AI curriculum that’s now taught in more than 5,000 schools globally. Students can get a jumpstart outside of class with NVIDIA’s AI Learning Essentials, a set of resources that equips individuals with the necessary knowledge, skills and certifications for the rapidly evolving AI workforce.

NVIDIA GPUs — whether running in university data centers, GeForce RTX laptops or NVIDIA RTX workstations — are accelerating studies, helping enhance the learning experience and enabling students to gain hands-on experience with hardware used widely in real-world applications.

Supercharged AI Studies

NVIDIA provides several tools to help students accelerate their studies.

The RTX AI Toolkit is a powerful resource for students looking to develop and customize AI models for projects in computer science, data science, and other STEM fields. It allows students to train and fine-tune the latest generative AI models, including Gemma, Llama 3 and Phi 3, up to 30x faster — enabling them to iterate and innovate more efficiently, advancing their studies and research projects.

Students studying data science and economics can use NVIDIA RAPIDS, a suite of GPU-accelerated AI and data science software libraries, to run traditional machine learning models up to 25x faster than conventional methods, helping them handle large datasets more efficiently, perform complex analyses in record time and gain deeper insights from data.
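As a rough sketch of what that looks like in practice, RAPIDS keeps the familiar pandas- and scikit-learn-style APIs, so a typical coursework script often needs little more than different imports to run on the GPU. The dataset file and column names below are hypothetical.

```python
# Minimal sketch of a GPU-accelerated data science workflow with RAPIDS:
# cuDF mirrors the pandas API and cuML mirrors scikit-learn.
# The CSV file and column names are hypothetical.
import cudf
from cuml.linear_model import LinearRegression

df = cudf.read_csv("housing.csv")              # data loads directly into GPU memory
X = df[["square_feet", "bedrooms", "age"]]
y = df["price"]

model = LinearRegression()
model.fit(X, y)                                # training runs on the GPU
print(model.coef_, model.intercept_)
```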

AI-deal for Robotics, Architecture and Design

Students studying robotics can tap the NVIDIA Isaac platform for developing, testing and deploying AI-powered robotics applications. Powered by NVIDIA GPUs, the platform consists of NVIDIA-accelerated libraries, applications frameworks and AI models that supercharge the development of AI-powered robots like autonomous mobile robots, arms and manipulators, and humanoids.

While GPUs have long been used for 3D design, modeling and simulation, their role has significantly expanded with the advancement of AI. Today, GPUs are used to run AI models that dramatically accelerate rendering processes.

Some industry-standard design tools powered by NVIDIA GPUs and AI include:

  • SOLIDWORKS Visualize: This 3D computer-aided design rendering software uses NVIDIA OptiX AI-powered denoising to produce high-quality ray-traced visuals, streamlining the design process by providing faster, more accurate visual feedback.
  • Blender: This popular 3D creation suite uses NVIDIA OptiX AI-powered denoising to deliver stunning ray-traced visuals, significantly accelerating content creation workflows.
  • D5 Render: Commonly used by architects, interior designers and engineers, D5 Render incorporates NVIDIA DLSS technology for real-time viewport rendering, enabling smoother, more detailed visualizations without sacrificing performance. Powered by fourth-generation Tensor Cores and the NVIDIA Optical Flow Accelerator on GeForce RTX 40 Series GPUs and NVIDIA RTX Ada Generation GPUs, DLSS uses AI to create additional frames and improve image quality.
  • Enscape: Enscape makes it possible to ray trace more geometry at a higher resolution, at exactly the same frame rate. It uses DLSS to enhance real-time rendering capabilities, providing architects and designers with seamless, high-fidelity visual previews of their projects.

Beyond STEM

Students, hobbyists and aspiring artists use the NVIDIA Studio platform to supercharge their creative processes with RTX and AI. RTX GPUs power creative apps such as Adobe Creative Cloud, Autodesk, Unity and more, accelerating a variety of processes such as exporting videos and rendering art.

ChatRTX is a demo app that lets students create a personalized GPT large language model connected to their own content and study materials, including text, images or other data. Powered by advanced AI, ChatRTX functions like a personalized chatbot that can quickly provide students relevant answers to questions based on their connected content. The app runs locally on a Windows RTX PC or workstation, meaning students can get fast, secure results personalized to their needs.

NVIDIA ChatRTX user interface.

Schools are increasingly adopting remote learning as a teaching modality. NVIDIA Broadcast — a free application that delivers professional-level audio and video with AI-powered features on RTX PCs and workstations — integrates seamlessly with remote learning applications including BlueJeans, Discord, Google Meet, Microsoft Teams, Webex and Zoom. It uses AI to enhance remote learning experiences by removing background noise, improving image quality in low-light scenarios, and enabling background blur and background replacement.

NVIDIA Broadcast.

From Data Centers to School Laptops

NVIDIA RTX-powered mobile workstations and GeForce RTX and Studio RTX 40 Series laptops offer supercharged development, learning, gaming and creating experiences with AI-enabled tools and apps. They also include exclusive access to the NVIDIA Studio platform of creative tools and technologies, and Max-Q technologies that optimize battery life and acoustics — giving students an ideal platform for all aspects of campus life.

Say goodbye to late nights in the computer lab — GeForce RTX laptops and NVIDIA RTX workstations share the same architecture as the NVIDIA GPUs powering many university labs and data centers. That means students can study, create and play — all on the same PC.

STEM Application Performance for GeForce RTX 4060 Laptop GPU versus Laptop without GeForce RTX GPU.

Learn more about GeForce RTX laptops and NVIDIA RTX workstations.

Editor’s Paradise: NVIDIA RTX-Powered Video Software CyberLink PowerDirector Gains High-Efficiency Video Coding Upgrades https://blogs.nvidia.com/blog/studio-hevc-rtx-ai-august-driver/ (Aug. 6, 2024)

Editor’s note: This post is part of our In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX GPU features, technologies and resources, and how they dramatically accelerate content creation.

Every month brings new creative app updates and optimizations powered by the NVIDIA Studio platform — supercharging creative processes with NVIDIA RTX and AI.

RTX-powered video editing app CyberLink PowerDirector now has a setting for high-efficiency video coding (HEVC). 3D artists can access new features and faster workflows in Adobe Substance 3D Modeler and SideFX Houdini. And content creators using Topaz Video AI Pro can now scale their photo and video touchups faster with NVIDIA TensorRT acceleration.

The August Studio Driver is ready to install via the NVIDIA app beta — the essential companion for creators and gamers — to keep GeForce RTX PCs up to date with the latest NVIDIA drivers and technology.

And this week’s featured In the NVIDIA Studio artist Stavros Liaskos is creating physically accurate 3D digital replicas of Greek Orthodox churches, holy temples, monasteries and other buildings using the NVIDIA Omniverse platform for building and connecting Universal Scene Description (OpenUSD) apps.

Discover the latest breakthroughs in graphics and generative AI by watching the replay of NVIDIA founder and CEO Jensen Huang’s fireside chats with Lauren Goode, senior writer at WIRED, and Meta founder and CEO Mark Zuckerberg at SIGGRAPH.

There’s a Creative App for That

The NVIDIA NVENC video encoder is built into every RTX graphics card, offloading the compute-intensive task of video encoding from the CPU to a dedicated part of the GPU.

CyberLink PowerDirector, a popular video editing program that recently added support for RTX Video HDR, now has a setting that enables the NVIDIA NVENC HEVC Ultra-High-Quality mode to boost HEVC encoding quality.

The new functionality reduces bit rates and improves encoding efficiency by 10%, significantly boosting video quality. Using the custom setting, content creators can offer audiences superior viewing experiences.

Encoding efficiency jumps by 55% with just a few clicks.

Alpha exporting allows users to add overlay effects to videos by exporting HEVC video with an alpha channel. This technique can be used to create transparent backgrounds and rapidly process animated overlays, making it ideal for creating social media content.

With an alpha channel, users can export HEVC videos up to 8x faster compared with run-length encoding supported by other processors, and with a 100x reduction in file size.

Adobe Substance 3D Modeler, a multisurface 3D sculpting tool for artists, virtual effects specialists and designers, released Block to Stock, an AI-powered, geometry-based feature for accelerating the prototyping of complex shapes.

It allows rough 3D shapes to be quickly replaced with pre-existing, similarly shaped 3D models that have greater detail. The result is a highly detailed shape crafted in no time.

The recently released version 20.5 of SideFX Houdini, a 3D procedural software for modeling, animation and lighting, introduced NVIDIA OptiX 8 and NVIDIA’s Shader Execution Reordering feature to its Karma XPU renderer — exclusively on NVIDIA RTX GPUs.

With these additions, computationally intensive tasks can now be executed up to 4x faster on RTX GPUs.

Topaz Video AI Pro, a photo and video enhancement software for noise reduction, sharpening and upscaling, added TensorRT acceleration for multi-GPU configurations, enabling parallelization across multiple GPUs for supercharged rendering speeds — up to 2x faster with two GPUs over a single GPU system, with further acceleration in systems with additional GPUs.

Virtual Cultural Sites to G(r)eek Out About

Anyone can now explore over 30 Greek cultural sites in virtual reality, thanks to the immersive work of Stavros Liaskos, managing director of visual communications company Reyelise.

“Many historical and religious sites are at risk due to environmental conditions, neglect and socio-political issues,” he said. “By creating detailed 3D replicas, we’re helping to ensure their architectural splendor is preserved digitally for future generations.”

Liaskos dedicated the project to his father, who passed away last year.

“He taught me the value of patience and instilled in me the belief that nothing is unattainable,” he said. “His wisdom and guidance continue to inspire me every day.”

Churches are architecturally complex structures. To create physically accurate 3D models of them, Liaskos used the advanced real-time rendering capabilities of Omniverse, connected with a slew of content-creation apps.

The OpenUSD framework enabled a seamless workflow across the various apps Liaskos used. For example, after using Trimble X7 for highly accurate 3D scanning of structures, Liaskos easily moved to Autodesk 3ds Max and Blender for modeling and animation.

Then, with ZBrush, he sculpted intricate architectural details on the models and refined textures with Adobe Photoshop and Substance 3D. It was all brought together in Omniverse for real-time lighting and rendering.

Interior rendering of the Panagia Xrysospiliotissa Church in Athens, Greece.

For post-production work, like adding visual effects and compiling rendered scenes, Liaskos used OpenUSD to transfer his projects to Adobe After Effects, where he finalized the video output. Nearly every element of his creative workflow was accelerated by his NVIDIA RTX A4500 GPU. 
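A simplified view of how that hand-off works: each application writes its own USD layer, and an assembly stage composes them non-destructively. The snippet below is a generic sketch using the open-source usd-core Python bindings (pxr); the file and prim names are hypothetical, not Liaskos' actual assets.

```python
# Generic sketch of an OpenUSD assembly stage: layers exported from different
# apps (scanning, modeling, sculpting) are composed as references on one stage.
# File names are hypothetical.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("church_assembly.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)

scan = stage.DefinePrim("/World/Scan", "Xform")
scan.GetReferences().AddReference("./layers/trimble_scan.usd")         # laser-scan base

ornaments = stage.DefinePrim("/World/Ornaments", "Xform")
ornaments.GetReferences().AddReference("./layers/zbrush_details.usd")  # sculpted detail

stage.GetRootLayer().Save()   # open the result in Omniverse for lighting and rendering
```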

Interior scene of the Church of Saint Basil on Metsovou Street in Athens.

Liaskos also explored developing extended reality (XR) applications that allow users to navigate his 3D projects in real time in virtual reality (VR).

 

First, he used laser scanning and photogrammetry to capture the detailed geometries and textures of the churches.

 

Then, he tapped Autodesk 3ds Max and Maxon ZBrush for retopology, ensuring the models were optimized for real-time rendering without compromising detail.

After importing them into NVIDIA Omniverse with OpenUSD, Liaskos packaged the XR scenes so they could be streamed to VR headsets using either the NVIDIA Omniverse Create XR spatial computing app or Unity Engine, enabling immersive viewing experiences.

“This approach will even more strikingly showcase the architectural beauty and cultural significance of these sites,” Liaskos said. “The simulation must be as good as possible to recreate the overwhelming, impactful feeling of calm and safety that comes with visiting a deeply spiritual space.”

Creator Stavros Liaskos.

The project is co-funded by the European Union within the framework of the operational program Digital Transformation 2021-2027 for the Greek Holy Archbishopric of Athens.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Stay up to date on NVIDIA Omniverse with Instagram, Medium and X. For more, join the Omniverse community and check out the Omniverse forums, Discord server, Twitch and YouTube channels. 

Taking AI to Warp Speed: Decoding How NVIDIA’s Latest RTX-Powered Tools and Apps Help Developers Accelerate AI on PCs and Workstations https://blogs.nvidia.com/blog/ai-decoded-siggraph-chat-rtx-update/ (July 31, 2024)

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC users.

NVIDIA is spotlighting the latest NVIDIA RTX-powered tools and apps at SIGGRAPH, an annual trade show at the intersection of graphics and AI.

These AI technologies provide advanced ray-tracing and rendering techniques, enabling highly realistic graphics and immersive experiences in gaming, virtual reality, animation and cinematic special effects. RTX AI PCs and workstations are helping drive the future of interactive digital media, content creation, productivity and development.

ACE’s AI Magic

During a SIGGRAPH fireside chat, NVIDIA founder and CEO Jensen Huang introduced “James” — an interactive digital human built on NVIDIA NIM microservices — that showcases the potential of AI-driven customer interactions.

Using NVIDIA ACE technology and based on a customer-service workflow, James is a virtual assistant that can connect with people using emotions, humor and contextually accurate responses. Soon, users will be able to interact with James in real time at ai.nvidia.com.

James is a virtual assistant in NVIDIA ACE.

NVIDIA also introduced the latest advancements in the NVIDIA Maxine AI platform for telepresence, as well as companies adopting NVIDIA ACE, a suite of technologies for bringing digital humans to life with generative AI. These technologies enable digital human development with AI models for speech and translation, vision, intelligence, realistic animation and behavior, and lifelike appearance.

Maxine features two AI technologies that enhance the digital human experience in telepresence scenarios: Maxine 3D and Audio2Face-2D.

Developers can harness Maxine and ACE technologies to drive more engaging and natural interactions for people using digital interfaces across customer service, gaming and other interactive experiences.

Tapping advanced AI, NVIDIA ACE technologies allow developers to design avatars that can respond to users in real time with lifelike animations, speech and emotions. RTX GPUs provide the necessary computational power and graphical fidelity to render ACE avatars with stunning detail and fluidity.

With ongoing advancements and increasing adoption, ACE is setting new benchmarks for building virtual worlds and sparking innovation across industries. Developers tapping into the power of ACE with RTX GPUs can build more immersive applications and advanced, AI-based, interactive digital media experiences.

RTX Updates Unleash AI-rtistry for Creators

NVIDIA GeForce RTX PCs and NVIDIA RTX workstations are getting an upgrade with GPU accelerations that provide users with enhanced AI content-creation experiences.

For video editors, RTX Video HDR is now available through Wondershare Filmora and DaVinci Resolve. With this technology, users can transform any content into high dynamic range video with richer colors and greater detail in light and dark scenes — making it ideal for gaming videos, travel vlogs or event filmmaking. Combining RTX Video HDR with RTX Video Super Resolution further improves visual quality by removing encoding artifacts and enhancing details.

RTX Video HDR requires an RTX GPU connected to an HDR10-compatible monitor or TV. Users with an RTX GPU-powered PC can send files to the Filmora desktop app and continue to edit with local RTX acceleration, doubling the speed of the export process with dual encoders on GeForce RTX 4070 Ti or above GPUs. Popular media player VLC in June added support for RTX Video Super Resolution and RTX Video HDR, adding AI-enhanced video playback.

Read this blog on RTX-powered video editing and the RTX Video FAQ for more information. Learn more about Wondershare Filmora’s AI-powered features.

In addition, 3D artists are gaining more AI applications and tools that simplify and enhance workflows from companies including Replikant, Adobe, Topaz and Getty Images.

Replikant, an AI-assisted 3D animation platform, is integrating NVIDIA Audio2Face, an ACE technology, to enable improved lip sync and facial animation. By taking advantage of NVIDIA-accelerated generative models, users can enjoy real-time visuals enhanced by RTX and NVIDIA DLSS technology. Replikant is now available on Steam.

Adobe Substance 3D Modeler has added Search Asset Library by Shape, an AI-powered feature designed to streamline the replacement and enhancement of complex shapes using existing 3D models. This new capability significantly accelerates prototyping and enhances design workflows.

New AI features in Adobe Substance 3D integrate advanced generative AI capabilities, enhancing its texturing and material-creation tools. Adobe has launched the first integration of its Firefly generative AI capabilities into Substance 3D Sampler and Stager, making 3D workflows more seamless and productive for industrial designers, game developers and visual effects professionals.

For tasks like text-to-texture generation and prompt descriptions, Substance 3D users can generate photorealistic or stylized textures. These textures can then be applied directly to 3D models. The new Text to Texture and Generative Background features significantly accelerate traditionally time-consuming and intricate 3D texturing and staging tasks.

Powered by NVIDIA RTX Tensor Cores, Substance 3D can significantly accelerate computations and allows for more intuitive and creative design processes. This development builds on Adobe’s innovation with Firefly-powered Creative Cloud upgrades in Substance 3D workflows.

Topaz AI has added NVIDIA TensorRT acceleration for multi-GPU workflows, enabling parallelization across multiple GPUs for supercharged rendering speeds — up to 2x faster with two GPUs over a single GPU system, and scaling further with additional GPUs.

Getty Images has updated its Generative AI by iStock service with new features to enhance image generation and quality. Powered by NVIDIA Edify models, the latest enhancement delivers generation speeds set to reach around six seconds for four images, doubling the performance of the previous model, with speeds at the forefront of the industry. The improved Text-2-Image and Image-2-Image functionalities provide higher-quality results and greater adherence to user prompts.

Generative AI by iStock users can now also designate camera settings such as focal length (narrow, standard or wide) and depth of field (near or far). Improvements to generative AI super-resolution enhance image quality by using AI to create new pixels, significantly improving resolution without over-sharpening the image.

LLM-azing AI

ChatRTX — a tech demo that connects a large language model (LLM), like Meta’s Llama, to a user’s data for quickly querying notes, documents or images — is getting a user interface (UI) makeover, offering a cleaner, more polished experience.

ChatRTX also serves as an open-source reference project that shows developers how to build powerful, local, retrieval-augmented generation (RAG) applications accelerated by RTX.
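At its core, a local RAG application follows a simple pattern: embed the user's files once, then retrieve the most relevant chunks and prepend them to the LLM prompt at query time. The sketch below shows only that retrieval step in a framework-agnostic way; it is not ChatRTX's actual implementation, and the embedding function is a placeholder for any local embedding model.

```python
# Framework-agnostic sketch of the retrieval step in a RAG pipeline
# (not ChatRTX's implementation). embed() stands in for a real local
# embedding model; here it returns random vectors so the sketch runs.
import numpy as np

rng = np.random.default_rng(0)

def embed(texts):
    # Placeholder: replace with a real local embedding model.
    return rng.normal(size=(len(texts), 384))

docs = ["lecture notes on ray tracing", "lab report: DLSS benchmarks", "meeting minutes"]
doc_vecs = embed(docs)

def retrieve(query, k=2):
    q = embed([query])[0]
    scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

context = "\n".join(retrieve("what did the DLSS benchmarks show?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: what did the benchmarks show?"
# prompt is then sent to a locally running LLM for the final answer.
```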

ChatRTX is getting a user interface (UI) makeover.

The latest version of ChatRTX, released today, uses the Electron + Material UI framework, which lets developers more easily add their own UI elements or extend the technology’s functionality. The update also includes a new architecture that simplifies the integration of different UIs and streamlines the building of new chat and RAG applications on top of the ChatRTX backend application programming interface.

End users can download the latest version of ChatRTX from the ChatRTX web page. Developers can find the source code for the new release on the ChatRTX GitHub repository.

Meta Llama 3.1-8B models are now optimized for inference on NVIDIA GeForce RTX PCs and NVIDIA RTX workstations. These models are natively supported with NVIDIA TensorRT-LLM, open-source software that accelerates LLM inference performance.

Dell’s AI Chatbots: Harnessing RTX Rocket Fuel

Dell is presenting how enterprises can boost AI development with an optimized RAG chatbot using NVIDIA AI Workbench and an NVIDIA NIM microservice for Llama 3. Using the NVIDIA AI Workbench Hybrid RAG Project, Dell is demonstrating how the chatbot can be used to converse with enterprise data that’s embedded in a local vector database, with inference running in one of three ways (see the sketch after the list):

  • Locally on a Hugging Face TGI server
  • In the cloud using NVIDIA inference endpoints
  • On self-hosted NVIDIA NIM microservices
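All three modes can expose OpenAI-compatible chat endpoints (TGI's Messages API, NVIDIA's hosted API and self-hosted NIM microservices), so a single client can typically switch between them by changing the base URL. The snippet below is an illustrative sketch, not Dell's actual project code; the URLs, key and model name are placeholders.

```python
# Illustrative sketch (not the AI Workbench project code): one OpenAI-compatible
# client pointed at any of the three inference backends listed above.
# URLs, keys and the model identifier are placeholders.
from openai import OpenAI

backends = {
    "local_tgi":       ("http://localhost:8080/v1",            "unused"),
    "nvidia_hosted":   ("https://integrate.api.nvidia.com/v1", "YOUR_NVIDIA_API_KEY"),
    "self_hosted_nim": ("http://nim.internal.example:8000/v1", "unused"),
}

base_url, api_key = backends["self_hosted_nim"]
client = OpenAI(base_url=base_url, api_key=api_key)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",   # example model identifier
    messages=[{"role": "user", "content": "Summarize our Q2 support tickets."}],
)
print(response.choices[0].message.content)
```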

Learn more about the AI Workbench Hybrid RAG Project. SIGGRAPH attendees can experience this technology firsthand at Dell Technologies’ booth 301.

HP AI Studio: Innovate Faster With CUDA-X and Galileo

At SIGGRAPH, HP is presenting the Z by HP AI Studio, a centralized data science platform. Announced in October 2023, AI Studio has now been enhanced with the latest NVIDIA CUDA-X libraries as well as HP’s recent partnership with Galileo, a generative AI trust-layer company. Key benefits include:

  • Deploy projects faster: Configure, connect and share local and remote projects quickly.
  • Collaborate with ease: Access and share data, templates and experiments effortlessly.
  • Work your way: Choose where to work on your data, easily switching between online and offline modes.

Designed to enhance productivity and streamline AI development, AI Studio allows data science teams to focus on innovation. Visit HP’s booth 501 to see how AI Studio with RAPIDS cuDF can boost data preprocessing to accelerate AI pipelines. Apply for early access to AI Studio.

An RTX Speed Surge for Stable Diffusion

Stable Diffusion 3.0, the latest model from Stability AI, has been optimized with TensorRT to provide a 60% speedup.

A NIM microservice for Stable Diffusion 3 with optimized performance is available for preview on ai.nvidia.com.

There’s still time to join NVIDIA at SIGGRAPH to see how RTX AI is transforming the future of content creation and visual media experiences. The conference runs through Aug. 1.

Generative AI is transforming graphics and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Next-Gen Video Editing: Wondershare Filmora Adds NVIDIA RTX Video HDR Support, RTX-Accelerated AI Features https://blogs.nvidia.com/blog/studio-wondershare-filmora-rtx-ai-july-driver/ (July 16, 2024)

Editor’s note: This post is part of our In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX GPU features, technologies and resources, and how they dramatically accelerate content creation.

Wondershare Filmora — a video editing app with AI-powered tools — now supports NVIDIA RTX Video HDR, joining editing software like Blackmagic Design’s DaVinci Resolve and CyberLink PowerDirector.

RTX Video HDR significantly enhances video quality, ensuring the final output is suitable for the best monitors available today.

Livestreaming software OBS Studio and XSplit Broadcaster now support Twitch Enhanced Broadcasting, giving streamers more control over video quality through client-side encoding and automatic configurations. The feature, developed in collaboration between Twitch, OBS and NVIDIA, also paves the way for more advancements, including vertical live video and advanced codecs such as HEVC and AV1.

A summer’s worth of creative app updates are included in the July Studio Driver, ready for download today. Install the NVIDIA app beta — the essential companion for creators and gamers — to keep GeForce RTX PCs up to date with the latest NVIDIA drivers and technology.

Join NVIDIA at SIGGRAPH to learn about the latest breakthroughs in graphics and generative AI, and tune in to a fireside chat featuring NVIDIA founder and CEO Jensen Huang and Lauren Goode, senior writer at WIRED, on Monday, July 29 at 2:30 p.m. MT. Register now.

And this week’s featured In the NVIDIA Studio artist, Kevin Stratvert, shares all about AI-powered content creation in Wondershare Filmora.

(Wonder)share the Beauty of RTX Video

RTX Video HDR analyzes standard dynamic range video and transforms it into HDR10-quality video, expanding the color gamut to produce clearer, more vibrant frames and enhancing the sense of depth for greater immersion.

With RTX Video HDR, Filmora users can create high-quality content that’s ideal for gaming videos, travel vlogs or event filmmaking.

Combining RTX Video HDR with RTX Video Super Resolution — another AI-powered tool that uses trained models to sharpen edges, restore features and remove artifacts in video — further enhances visual quality. RTX Video HDR requires an NVIDIA RTX GPU connected to an HDR10-compatible monitor or TV. For more information, check out the RTX Video FAQ.

Those with an RTX GPU-powered PC can send files to the Filmora desktop app and continue to edit with local RTX acceleration, doubling the speed of the export process with dual encoders on GeForce RTX 4070 Ti or above GPUs.

Learn more about Wondershare Filmora’s AI-powered features.

Maximizing AI Features in Filmora

Kevin Stratvert has the heart of a teacher — he’s always loved to share his technical knowledge and tips with others.

One day, he thought, “Why not make a YouTube video to explain stuff directly to users?” His first big hit was a tutorial on how to get Microsoft Office for free through Office.com. The video garnered millions of views and tons of engagement — and he’s continued creating content ever since.

“The more content I created, the more questions and feedback I got from viewers, sparking this cycle of creativity and connection that I just couldn’t get enough of,” said Stratvert.

Explaining the benefits of AI has been an area of particular interest for Stratvert, especially as it relates to AI-powered features in Wondershare Filmora. In one YouTube video, Filmora Video Editor Tutorial for Beginners, he breaks down the AI effects video editors can use to accelerate their workflows.

Examples include:

  • Smart Edit: Edit footage based on automatically generated transcripts, including in multiple languages.
  • Smart Cutout: Remove unwanted objects or change the background in seconds.
  • Speech-to-Text: Automatically generate compelling descriptions, titles and captions.

“AI has become a crucial part of my creative toolkit, especially for refining details that really make a difference,” said Stratvert. “By handling these technical tasks, AI frees up my time to focus more on creating content, making the whole process smoother and more efficient.”

Stratvert has also been experimenting with NVIDIA ChatRTX, a technology that lets users interact with their local data by installing and configuring various AI models and prompting them for both text and image outputs using CLIP and more.

NVIDIA Broadcast has been instrumental in giving Stratvert a professional setup for web conferences and livestreams. The app’s features, including background noise removal and virtual background, help maintain a professional appearance on screen. It’s especially useful in home studio settings, where controlling variables in the environment can be challenging.

“NVIDIA Broadcast has been instrumental in professionalizing my setup for web conferences and livestreams.” — Kevin Stratvert

Stratvert stresses the importance of his GeForce RTX 4070 graphics card in the content creation process.

“With an RTX GPU, I’ve noticed a dramatic improvement in render times and the smoothness of playback, even in demanding scenarios,” he said. “Additionally, the advanced capabilities of RTX GPUs support more intensive tasks like real-time ray tracing and AI-driven editing features, which can open up new creative possibilities in my edits.”

Check out Stratvert’s video tutorials on his website.

Content creator Kevin Stratvert.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Jensen Huang, Mark Zuckerberg to Discuss Future of Graphics and Virtual Worlds at SIGGRAPH 2024 https://blogs.nvidia.com/blog/huang-zuckerberg-siggraph-2024/ (July 15, 2024)

NVIDIA founder and CEO Jensen Huang and Meta founder and CEO Mark Zuckerberg will hold a public fireside chat on Monday, July 29, at the 51st edition of the SIGGRAPH graphics conference in Denver.

The two leaders will discuss the future of AI and simulation and the pivotal role of research at SIGGRAPH, which focuses on the intersection of graphics and technology.

Before the discussion, Huang will also appear in a fireside chat with WIRED senior writer Lauren Goode to discuss AI and graphics for the new computing revolution.

Both conversations will be available live and on replay at NVIDIA.com.

The appearances at the conference, which runs July 28-Aug. 1, highlight SIGGRAPH’s continued role in technological innovation. Nearly 100 exhibitors will showcase how graphics are stepping into the future.

Attendees exploring the SIGGRAPH Innovation Zone will encounter startups at the forefront of computing and graphics while insights from industry leaders like Huang deliver a glimpse into the technological horizon.

Since the conference’s 1974 inception in Boulder, Colorado, SIGGRAPH has been at the forefront of innovation.

It introduced the world to demos such as the “Aspen Movie Map” — a precursor to Google Street View decades ahead of its time — and one of the first screenings of Pixar’s Luxo Jr., which redefined the art of animation.

The conference remains the leading venue for groundbreaking research in computer graphics.

Publications that redefined modern visual culture — including Ed Catmull’s 1974 paper on texture mapping, Turner Whitted’s 1980 paper on ray-tracing techniques, and James T. Kajiya’s 1986 “The Rendering Equation” — first made their debut at SIGGRAPH.

Innovations like these are now spilling out across the world’s industries.

Throughout the Innovation Zone, over a dozen startups are showcasing how they’re bringing advancements rooted in graphics into diverse fields — from robotics and manufacturing to autonomous vehicles and scientific research, including climate science.

Highlights include Tomorrow.io, which leverages NVIDIA Earth-2 to provide precise weather insights and early warning systems that help organizations adapt to a changing climate.

Looking Glass is pioneering holographic technology that enables 3D content experiences without headsets. The company is using NVIDIA RTX 6000 Ada Generation GPUs and NVIDIA Maxine technology to enhance real-time audio, video and augmented-reality effects to make this possible.

Manufacturing startup nTop developed a computer-aided design tool using NVIDIA GPU-powered signed distance fields. The tool uses the NVIDIA OptiX rendering engine and a two-way NVIDIA Omniverse LiveLink connector to enable real-time, high-fidelity visualization and collaboration across design and simulation platforms.

Conference attendees can also explore how generative AI — a technology deeply rooted in visual computing — is remaking professional graphics.

On July 31, industry leaders and developers will gather in room 607 at the Colorado Convention Center for Generative AI Day, exploring cutting-edge solutions for visual effects, animation and game development with leaders from Bria AI, Cuebric, Getty Images, Replikant, Shutterstock and others.

The conference’s speaker lineup is equally compelling.

In addition to Huang and Zuckerberg, notable presenters include Dava Newman of MIT Media Lab and Mark Sagar from Soul Machines, who’ll delve into the intersections of bioengineering, design and digital humans.

Join the global technology community in Denver later this month to discover why SIGGRAPH remains at the forefront of demonstrating, predicting and shaping the future of technology.

Mile-High AI: NVIDIA Research to Present Advancements in Simulation and Gen AI at SIGGRAPH https://blogs.nvidia.com/blog/siggraph-2024-ai-graphics-research/ Fri, 12 Jul 2024 13:00:28 +0000 https://blogs.nvidia.com/?p=72914

NVIDIA is taking an array of advancements in rendering, simulation and generative AI to SIGGRAPH 2024, the premier computer graphics conference, which will take place July 28 – Aug. 1 in Denver.

More than 20 papers from NVIDIA Research introduce innovations advancing synthetic data generators and inverse rendering tools that can help train next-generation models. NVIDIA’s AI research is making simulation better by boosting image quality and unlocking new ways to create 3D representations of real or imagined worlds.

The papers focus on diffusion models for visual generative AI, physics-based simulation and increasingly realistic AI-powered rendering. They include two technical Best Paper Award winners and collaborations with universities across the U.S., Canada, China, Israel and Japan as well as researchers at companies including Adobe and Roblox.

These initiatives will help create tools that developers and businesses can use to generate complex virtual objects, characters and environments. Synthetic data generation can then be harnessed to tell powerful visual stories, aid scientists’ understanding of natural phenomena or assist in simulation-based training of robots and autonomous vehicles.

Diffusion Models Improve Texture Painting, Text-to-Image Generation

Diffusion models, a popular tool for transforming text prompts into images, can help artists, designers and other creators rapidly generate visuals for storyboards or production, reducing the time it takes to bring ideas to life.

Two NVIDIA-authored papers are advancing the capabilities of these generative AI models.

ConsiStory, a collaboration between researchers at NVIDIA and Tel Aviv University, makes it easier to generate multiple images with a consistent main character — an essential capability for storytelling use cases such as illustrating a comic strip or developing a storyboard. The researchers’ approach introduces a technique called subject-driven shared attention, which reduces the time it takes to generate consistent imagery from 13 minutes to around 30 seconds.

ConsiStory is capable of generating a series of images featuring the same character.

NVIDIA researchers last year won the Best in Show award at SIGGRAPH’s Real-Time Live event for AI models that turn text or image prompts into custom textured materials. This year, they’re presenting a paper that applies 2D generative diffusion models to interactive texture painting on 3D meshes, enabling artists to paint in real time with complex textures based on any reference image.

Kick-Starting Developments in Physics-Based Simulation

Graphics researchers are narrowing the gap between physical objects and their virtual representations with physics-based simulation — a range of techniques to make digital objects and characters move the same way they would in the real world.

Several NVIDIA Research papers feature breakthroughs in the field, including SuperPADL, a project that tackles the challenge of simulating complex human motions based on text prompts (see video at top).

Using a combination of reinforcement learning and supervised learning, the researchers demonstrated how the SuperPADL framework can be trained to reproduce the motion of more than 5,000 skills — and can run in real time on a consumer-grade NVIDIA GPU.

Another NVIDIA paper features a neural physics method that applies AI to learn how objects — whether represented as a 3D mesh, a NeRF or a solid object generated by a text-to-3D model — would behave as they are moved in an environment.

 

A paper written in collaboration with Carnegie Mellon University researchers develops a new kind of renderer — one that, instead of modeling physical light, can perform thermal analysis, electrostatics and fluid mechanics. Named one of five best papers at SIGGRAPH, the method is easy to parallelize and doesn’t require cumbersome model cleanup, offering new opportunities for speeding up engineering design cycles.

In the example above, the renderer performs a thermal analysis of the Mars Curiosity rover, where keeping temperatures within a specific range is critical to mission success. 

Additional simulation papers introduce a more efficient technique for modeling hair strands and a pipeline that accelerates fluid simulation by 10x.

Raising the Bar for Rendering Realism, Diffraction Simulation

Another set of NVIDIA-authored papers present new techniques to model visible light up to 25x faster and simulate diffraction effects — such as those used in radar simulation for training self-driving cars — up to 1,000x faster.

A paper by NVIDIA and University of Waterloo researchers tackles free-space diffraction, an optical phenomenon where light spreads out or bends around the edges of objects. The team’s method can integrate with path-tracing workflows to increase the efficiency of simulating diffraction in complex scenes, offering up to 1,000x acceleration. Beyond rendering visible light, the model could also be used to simulate the longer wavelengths of radar, sound or radio waves.

Simulation of cellular signal coverage in a city.

Path tracing samples numerous paths — multi-bounce light rays traveling through a scene — to create a photorealistic picture. Two SIGGRAPH papers improve sampling quality for ReSTIR, a path-tracing algorithm first introduced by NVIDIA and Dartmouth College researchers at SIGGRAPH 2020 that has been key to bringing path tracing to games and other real-time rendering products.

One of these papers, a collaboration with the University of Utah, shares a new way to reuse calculated paths that increases effective sample count by up to 25x, significantly boosting image quality. The other improves sample quality by randomly mutating a subset of the light’s path. This helps denoising algorithms perform better, producing fewer visual artifacts in the final render.

From L to R: Compare the visual quality of previous sampling, the 25x improvement and a reference image. Model courtesy Blender Studio.

Teaching AI to Think in 3D

NVIDIA researchers are also showcasing multipurpose AI tools for 3D representations and design at SIGGRAPH.

One paper introduces fVDB, a GPU-optimized framework for 3D deep learning that matches the scale of the real world. The fVDB framework provides AI infrastructure for the large spatial scale and high resolution of city-scale 3D models and NeRFs, and segmentation and reconstruction of large-scale point clouds.

A Best Technical Paper award winner written in collaboration with Dartmouth College researchers introduces a theory for representing how 3D objects interact with light. The theory unifies a diverse spectrum of appearances into a single model.

And a collaboration with University of Tokyo, University of Toronto and Adobe Research introduces an algorithm that generates smooth, space-filling curves on 3D meshes in real time. While previous methods took hours, this framework runs in seconds and offers users a high degree of control over the output to enable interactive design.

NVIDIA at SIGGRAPH

Learn more about NVIDIA at SIGGRAPH. Special events include a fireside chat between NVIDIA founder and CEO Jensen Huang and Meta founder and CEO Mark Zuckerberg, as well as a fireside chat with Huang and Lauren Goode, senior writer at WIRED, on the impact of robotics and AI in industrial digitalization. 

NVIDIA researchers will also present OpenUSD Day by NVIDIA, a full-day event showcasing how developers and industry leaders are adopting and evolving OpenUSD to build AI-enabled 3D pipelines.

NVIDIA Research has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics. See more of their latest work.

Mission NIMpossible: Decoding the Microservices That Accelerate Generative AI https://blogs.nvidia.com/blog/ai-decoded-nim/ Wed, 10 Jul 2024 13:00:08 +0000 https://blogs.nvidia.com/?p=72866

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible and showcases new hardware, software, tools and accelerations for NVIDIA RTX PC and workstation users.

In the rapidly evolving world of artificial intelligence, generative AI is captivating imaginations and transforming industries. Behind the scenes, an unsung hero is making it all possible: microservices architecture.

The Building Blocks of Modern AI Applications

Microservices have emerged as a powerful architecture, fundamentally changing how people design, build and deploy software.

A microservices architecture breaks down an application into a collection of loosely coupled, independently deployable services. Each service is responsible for a specific capability and communicates with other services through well-defined application programming interfaces, or APIs. This modular approach stands in stark contrast to traditional all-in-one architectures, in which all functionality is bundled into a single, tightly integrated application.

By decoupling services, teams can work on different components simultaneously, accelerating development processes and allowing updates to be rolled out independently without affecting the entire application. Developers can focus on building and improving specific services, leading to better code quality and faster problem resolution. Such specialization allows developers to become experts in their particular domain.

Services can be scaled independently based on demand, optimizing resource utilization and improving overall system performance. In addition, different services can use different technologies, allowing developers to choose the best tools for each specific task.
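
The service boundary described above is easiest to see in code. Below is a minimal sketch of a single microservice exposing one capability behind a well-defined API, using FastAPI purely as an illustration; the service name and route are made up.

```python
# Minimal sketch of one independently deployable microservice (illustrative only).
# Each service owns a single capability and exposes it over a well-defined API,
# so it can be scaled, updated or replaced without touching the rest of the app.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="caption-service")  # hypothetical service name

class CaptionRequest(BaseModel):
    image_url: str

class CaptionResponse(BaseModel):
    caption: str

@app.post("/v1/caption", response_model=CaptionResponse)  # hypothetical route
def caption(req: CaptionRequest) -> CaptionResponse:
    # A real service would call a model here; this stub just echoes the input.
    return CaptionResponse(caption=f"A placeholder caption for {req.image_url}")
```

Other services, such as preprocessing or post-processing steps, would run in their own containers and call this endpoint over HTTP rather than importing its code, which is what makes independent scaling and deployment possible.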

A Perfect Match: Microservices and Generative AI

The microservices architecture is particularly well-suited for developing generative AI applications due to its scalability, enhanced modularity and flexibility.

AI models, especially large language models, require significant computational resources. Microservices allow for efficient scaling of these resource-intensive components without affecting the entire system.

Generative AI applications often involve multiple steps, such as data preprocessing, model inference and post-processing. Microservices enable each step to be developed, optimized and scaled independently. Plus, as AI models and techniques evolve rapidly, a microservices architecture allows for easier integration of new models as well as the replacement of existing ones without disrupting the entire application.

NVIDIA NIM: Simplifying Generative AI Deployment

As the demand for AI-powered applications grows, developers face challenges in efficiently deploying and managing AI models.

NVIDIA NIM inference microservices provide models as optimized containers to deploy in the cloud, data centers, workstations, desktops and laptops. Each NIM container includes the pretrained AI models and all the necessary runtime components, making it simple to integrate AI capabilities into applications.

NIM offers a game-changing approach for application developers looking to incorporate AI functionality by providing simplified integration, production-readiness and flexibility. Developers can focus on building their applications without worrying about the complexities of data preparation, model training or customization, as NIM inference microservices are optimized for performance, come with runtime optimizations and support industry-standard APIs.
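
To give a sense of what that integration looks like in practice, here is a hedged example of querying a NIM container through the OpenAI-compatible chat endpoint these containers typically expose; the port and model identifier are illustrative and depend on how the container was launched.

```python
# Hedged sketch: querying a locally running NIM container through its
# OpenAI-compatible chat endpoint. Port and model name are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-needed-for-local",       # local containers may not require a key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",      # illustrative model identifier
    messages=[{"role": "user", "content": "Summarize what a NIM microservice is."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```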

AI at Your Fingertips: NVIDIA NIM on Workstations and PCs

Building enterprise generative AI applications comes with many challenges. While cloud-hosted model APIs can help developers get started, issues related to data privacy, security, model response latency, accuracy, API costs and scaling often hinder the path to production.

Workstations with NIM provide developers with secure access to a broad range of models and performance-optimized inference microservices.

By avoiding the latency, cost and compliance concerns associated with cloud-hosted APIs as well as the complexities of model deployment, developers can focus on application development. This accelerates the delivery of production-ready generative AI applications — enabling seamless, automatic scale out with performance optimization in data centers and the cloud.

The recently announced general availability of the Meta Llama 3 8B model as a NIM, which can run locally on RTX systems, brings state-of-the-art language model capabilities to individual developers, enabling local testing and experimentation without the need for cloud resources. With NIM running locally, developers can create sophisticated retrieval-augmented generation (RAG) projects right on their workstations.

Local RAG refers to implementing RAG systems entirely on local hardware, without relying on cloud-based services or external APIs.

Developers can use the Llama 3 8B NIM on workstations with one or more NVIDIA RTX 6000 Ada Generation GPUs or on NVIDIA RTX systems to build end-to-end RAG systems entirely on local hardware. This setup allows developers to tap the full power of Llama 3 8B, ensuring high performance and low latency.

By running the entire RAG pipeline locally, developers can maintain complete control over their data, ensuring privacy and security. This approach is particularly helpful for developers building applications that require real-time responses and high accuracy, such as customer-support chatbots, personalized content-generation tools and interactive virtual assistants.
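
As a rough sketch of what running the whole pipeline locally can look like, the example below embeds a handful of documents, retrieves the best match with a simple cosine-similarity search and sends the augmented prompt to a locally hosted model. The embedding model, endpoint and model name are assumptions for illustration, not the specific components of any NVIDIA project.

```python
# Hedged sketch of a fully local RAG loop: embed, retrieve, then ask a local LLM.
# The embedding model, endpoint and model name are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer
from openai import OpenAI

docs = [
    "NIM containers bundle a pretrained model with its runtime components.",
    "RAG augments a prompt with passages retrieved from a document store.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative local embedder
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q                            # cosine similarity on unit vectors
    return [docs[i] for i in np.argsort(-scores)[:k]]

query = "What does a NIM container include?"
context = "\n".join(retrieve(query))

llm = OpenAI(base_url="http://localhost:8000/v1", api_key="local")  # assumed local server
answer = llm.chat.completions.create(
    model="meta/llama3-8b-instruct",                 # illustrative model name
    messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"}],
)
print(answer.choices[0].message.content)
```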

Hybrid RAG combines local and cloud-based resources to optimize performance and flexibility in AI applications. With NVIDIA AI Workbench, developers can get started with the hybrid-RAG Workbench Project — an example application that can be used to run vector databases and embedding models locally while performing inference using NIM in the cloud or data center, offering a flexible approach to resource allocation.

This hybrid setup allows developers to balance the computational load between local and cloud resources, optimizing performance and cost. For example, the vector database and embedding models can be hosted on local workstations to ensure fast data retrieval and processing, while the more computationally intensive inference tasks can be offloaded to powerful cloud-based NIM inference microservices. This flexibility enables developers to scale their applications seamlessly, accommodating varying workloads and ensuring consistent performance.

NVIDIA ACE NIM inference microservices bring digital humans, AI non-playable characters (NPCs) and interactive avatars for customer service to life with generative AI, running on RTX PCs and workstations.

ACE NIM inference microservices for speech — including Riva automatic speech recognition, text-to-speech and neural machine translation — allow accurate transcription, translation and realistic voices.

The NVIDIA Nemotron small language model is a NIM for intelligence that includes INT4 quantization for minimal memory usage and supports roleplay and RAG use cases.

And ACE NIM inference microservices for appearance include Audio2Face and Omniverse RTX for lifelike animation with ultrarealistic visuals. These provide more immersive and engaging gaming characters, as well as more satisfying experiences for users interacting with virtual customer-service agents.

Dive Into NIM

As AI progresses, the ability to rapidly deploy and scale its capabilities will become increasingly crucial.

NVIDIA NIM microservices provide the foundation for this new era of AI application development, enabling breakthrough innovations. Whether building the next generation of AI-powered games, developing advanced natural language processing applications or creating intelligent automation systems, users can access these powerful development tools at their fingertips.

Ways to get started:

  • Experience and interact with NVIDIA NIM microservices on ai.nvidia.com.
  • Join the NVIDIA Developer Program and get free access to NIM for testing and prototyping AI-powered applications.
  • Buy an NVIDIA AI Enterprise license with a free 90-day evaluation period for production deployment and use NVIDIA NIM to self-host AI models in the cloud or in data centers.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Widescreen Wonder: Las Vegas Sphere Delivers Dazzling Displays https://blogs.nvidia.com/blog/sphere-las-vegas/ Tue, 09 Jul 2024 16:00:39 +0000 https://blogs.nvidia.com/?p=72849

Sphere, a new kind of entertainment medium in Las Vegas, is joining the ranks of legendary circular performance spaces such as the Roman Colosseum and Shakespeare’s Globe Theater — captivating audiences with eye-popping LED displays that cover nearly 750,000 square feet inside and outside the venue.

Behind the screens, around 150 NVIDIA RTX A6000 GPUs help power stunning visuals on floor-to-ceiling, 16x16K displays across the Sphere’s interior, as well as 1.2 million programmable LED pucks on the venue’s exterior — the Exosphere, which is the world’s largest LED screen.

Delivering robust network connectivity, NVIDIA BlueField DPUs and NVIDIA ConnectX-6 Dx NICs — along with the NVIDIA DOCA Firefly Service and NVIDIA Rivermax software for media streaming — ensure that all the display panels act as one synchronized canvas.

“Sphere is captivating audiences not only in Las Vegas, but also around the world on social media, with immersive LED content delivered at a scale and clarity that has never been done before,” said Alex Luthwaite, senior vice president of show systems technology at Sphere Entertainment. “This would not be possible without the expertise and innovation of companies such as NVIDIA that are critical to helping power our vision, working closely with our team to redefine what is possible with cutting-edge display technology.”

Named one of TIME’s Best Inventions of 2023, Sphere hosts original Sphere Experiences, concerts and residencies from the world’s biggest artists, and premier marquee and corporate events.

Rock band U2 opened Sphere with a 40-show run that concluded in March. Other shows include The Sphere Experience featuring Darren Aronofsky’s Postcard From Earth, a specially created multisensory cinematic experience that showcases all of the venue’s immersive technologies, including high-resolution visuals, advanced concert-grade sound, haptic seats and atmospheric effects such as wind and scents.

“Postcard From Earth” is a multisensory immersive experience. Image courtesy of Sphere Entertainment.

Behind the Screens: Visual Technology Fueling the Sphere

Sphere Studios creates video content in its Burbank, Calif., facility, then transfers it digitally to Sphere in Las Vegas. The content is then streamed in real time to rack-mounted workstations equipped with NVIDIA RTX A6000 GPUs, achieving unprecedented performance capable of delivering three layers of 16K resolution at 60 frames per second.

The NVIDIA Rivermax software helps provide media streaming acceleration, enabling direct data transfers to and from the GPU. Combined, the software and hardware acceleration eliminates jitter and optimizes latency.

NVIDIA BlueField DPUs also facilitate precision timing through the DOCA Firefly Service, which is used to synchronize clocks in a network with sub-microsecond accuracy.

“The integration of NVIDIA RTX GPUs, BlueField DPUs and Rivermax software creates a powerful trifecta of advantages for modern accelerated computing, supporting the unique high-resolution video streams and strict timing requirements needed at Sphere and setting a new standard for media processing capabilities,” said Nir Nitzani, senior product director for networking software at NVIDIA. “This collaboration results in remarkable performance gains, culminating in the extraordinary experiences guests have at Sphere.” 

Well-Rounded: From Simulation to Sphere Stage

To create new immersive content exclusively for Sphere, Sphere Entertainment launched Sphere Studios, which is dedicated to developing the next generation of original immersive entertainment. The Burbank campus consists of numerous development facilities, including a quarter-sized version of the Sphere screen in Las Vegas, dubbed the Big Dome, which serves as a specialized screening and production facility and a lab for content.

The Big Dome is 100 feet high and 28,000 square feet. Image courtesy of Sphere Entertainment.

Sphere Studios also developed the Big Sky camera system, which captures uncompressed, 18K images from a single camera, so that the studio can film content for Sphere without needing to stitch multiple camera feeds together. The studio’s custom image processing software runs on Lenovo servers powered by NVIDIA A40 GPUs.

The A40 GPUs also fuel creative work, including 3D video, virtualization and ray tracing. To develop visuals for different kinds of shows, the team works with apps including Unreal Engine, Unity, Touch Designer and Notch.

For more, explore upcoming sessions in NVIDIA’s room at SIGGRAPH and watch the panel discussion “Immersion in Sphere: Redefining Live Entertainment Experiences” on NVIDIA On-Demand.

All images courtesy of Sphere Entertainment.

Into the Omniverse: SyncTwin Helps Democratize Industrial Digital Twins With Generative AI, OpenUSD https://blogs.nvidia.com/blog/synctwin-digital-twins-generative-ai-openusd/ Thu, 27 Jun 2024 13:00:19 +0000 https://blogs.nvidia.com/?p=72742

Editor’s note: This post is part of Into the Omniverse, a series focused on how technical artists, developers and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

Efficiency and sustainability are critical for organizations looking to be at the forefront of industrial innovation.

To address the digitalization needs of manufacturing and other industries, SyncTwin GmbH — a company that builds software to optimize production, intralogistics and assembly — developed a digital twin app using NVIDIA cuOpt, an accelerated optimization engine for solving complex routing problems, and NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services that enable developers to build OpenUSD-based applications.

SyncTwin is harnessing the power of the extensible OpenUSD framework for describing, composing, simulating and collaborating within 3D worlds to help its customers create physically accurate digital twins of their factories. The digital twins can be used to optimize production and deliver the digital precision that industrial performance demands.

OpenUSD’s Role in Modern Manufacturing

Manufacturing workflows are incredibly complex, making effective communication and integration across various domains pivotal to ensuring operational efficiency. The SyncTwin app provides seamless collaboration capabilities for factory plant managers and their teams, enabling them to optimize processes and resources.

The app uses OpenUSD and Omniverse to help make factory planning and operations easier and more accessible by integrating various manufacturing aspects into a cohesive digital twin. Customers can integrate visual data, production details, product catalogs, orders, schedules, resources and production settings from a variety of file formats all in one place with OpenUSD.

The SyncTwin app creates realistic, virtual environments that facilitate seamless interactions between different sectors of factory operations. This capability enables diverse data — including floorplans from Microsoft PowerPoint and warehouse container data from Excel spreadsheets — to be aggregated in a unified digital twin.

The flexibility of OpenUSD allows for non-destructive editing and composition of complex 3D assets and animations, further enhancing the digital twin.
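
That non-destructive composition model is visible even in a tiny example using the standard pxr Python API; the file names and prim paths below are invented for illustration and are not SyncTwin's actual schema.

```python
# Hedged sketch: composing several data sources into one OpenUSD stage
# non-destructively. File names and prim paths are illustrative only.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("factory_digital_twin.usda")
UsdGeom.Xform.Define(stage, "/Factory")

# Reference domain-specific layers authored elsewhere (e.g., exported from
# planning tools) without copying or modifying their contents.
layout = stage.DefinePrim("/Factory/Layout")
layout.GetReferences().AddReference("layout_from_powerpoint.usd")     # illustrative

warehouse = stage.DefinePrim("/Factory/Warehouse")
warehouse.GetReferences().AddReference("containers_from_excel.usd")   # illustrative

# Local, non-destructive override: move one asset without touching its source layer.
cell = stage.OverridePrim("/Factory/Layout/AssemblyCell")
UsdGeom.XformCommonAPI(cell).SetTranslate(Gf.Vec3d(2.0, 0.0, 0.0))

stage.GetRootLayer().Save()
```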

“OpenUSD is the common language bringing all these different factory domains into a single digital twin,” said Michael Wagner, cofounder and chief technology officer of SyncTwin. “The framework can be instrumental in dismantling data silos and enhancing collaborative efficiency across different factory domains, such as assembly, logistics and infrastructure planning.”

Hear Wagner discuss turning PowerPoint and Excel data into digital twin scenarios using the SyncTwin App in a LinkedIn livestream on July 4 at 11 a.m. CET.

Pioneering Generative AI in Factory Planning

By integrating generative AI into its platform, SyncTwin also provides users with data-driven insights and recommendations, enhancing decision-making processes.

This AI integration automates complex analyses, accelerates operations and reduces the need for manual inputs. Learn more about how SyncTwin and other startups are combining the powers of OpenUSD and generative AI to elevate their technologies in this NVIDIA GTC session.

Hear SyncTwin and NVIDIA experts discuss how digital twins are unlocking new possibilities in this recent community livestream:

By tapping into the power of OpenUSD and NVIDIA’s AI and optimization technologies, SyncTwin is helping set new standards for factory planning and operations, improving operational efficiency and supporting the vision of sustainability and cost reduction across manufacturing.

Get Plugged Into the World of OpenUSD

Learn more about OpenUSD and meet with NVIDIA experts at SIGGRAPH, taking place July 28-Aug. 1 at the Colorado Convention Center and online. Attend these SIGGRAPH highlights:

  • NVIDIA founder and CEO Jensen Huang’s fireside chat on Monday, July 29, covering the latest in generative AI and accelerated computing.
  • OpenUSD Day on Tuesday, July 30, where industry luminaries and developers will showcase how to build 3D pipelines and tools using OpenUSD.
  • Hands-on OpenUSD training for all skill levels.

Check out this video series about how OpenUSD can improve 3D workflows. For more resources on OpenUSD, explore the Alliance for OpenUSD forum and visit the AOUSD website.

Get started with NVIDIA Omniverse by downloading the standard license free, access OpenUSD resources and learn how Omniverse Enterprise can connect teams. Follow Omniverse on Instagram, LinkedIn, Medium and X. For more, join the Omniverse community on the forums, Discord server and YouTube channel. 

Featured image courtesy of SyncTwin GmbH.

Why 3D Visualization Holds Key to Future Chip Designs https://blogs.nvidia.com/blog/ansys-omniverse-modulus-accelerate-simulation/ Mon, 24 Jun 2024 19:00:44 +0000 https://blogs.nvidia.com/?p=72721

Multi-die chips, known as three-dimensional integrated circuits, or 3D-ICs, represent a revolutionary step in semiconductor design. The chips are vertically stacked to create a compact structure that boosts performance without increasing power consumption.

However, as chips become denser, they present more complex challenges in managing electromagnetic and thermal stresses. To understand and address this, advanced 3D multiphysics visualizations become essential to design and diagnostic processes.

At this week’s Design Automation Conference, a global event showcasing the latest developments in chips and systems, Ansys — a company that develops engineering simulation and 3D design software — will share how it’s using NVIDIA technology to overcome these challenges to build the next generation of semiconductor systems.

To enable 3D visualizations of simulation results for their users, Ansys uses NVIDIA Omniverse, a platform of application programming interfaces, software development kits, and services that enables developers to easily integrate Universal Scene Description (OpenUSD) and NVIDIA RTX rendering technologies into existing software tools and simulation workflows.

The platform powers visualizations of 3D-IC results from Ansys solvers so engineers can evaluate phenomena like electromagnetic fields and temperature variations to optimize chips for faster processing, increased functionality and improved reliability.

With Ansys Icepak on the NVIDIA Omniverse platform, engineers can simulate temperatures across a chip according to different power profiles and floor plans. Finding chip hot-spots can lead to better design of the chips themselves, as well as auxiliary cooling devices. However, these 3D-IC simulations are computationally intensive, limiting the number of simulations and design points users can explore.

Using NVIDIA Modulus, combined with novel techniques for handling arbitrary power patterns in the Ansys RedHawk-SC electrothermal data pipeline and model training framework, the Ansys R&D team is exploring the acceleration of simulation workflows with AI-based surrogate models. Modulus is an open-source AI framework for building, training and fine-tuning physics-ML models at scale with a simple Python interface.

With the NVIDIA Modulus Fourier neural operator (FNO) architecture, which can parameterize solutions for a distribution of partial differential equations, Ansys researchers created an AI surrogate model that efficiently predicts temperature profiles for any given power profile and a given floor plan defined by system parameters like heat transfer coefficient, thickness and material properties. This model offers near real-time results at significantly reduced computational costs, allowing Ansys users to explore a wider design space for new chips.
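
The core idea behind an FNO layer can be sketched in a few lines of PyTorch: transform the input field to the Fourier domain, apply learned complex weights to a truncated set of low-frequency modes and transform back. The snippet below is a conceptual illustration only, not the Ansys or NVIDIA Modulus implementation.

```python
# Conceptual sketch of a 2D Fourier-neural-operator layer (not the Modulus code).
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of low-frequency Fourier modes to keep
        scale = 1.0 / (channels * channels)
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width), e.g., a power map over a floor plan
        x_ft = torch.fft.rfft2(x)
        out_ft = torch.zeros_like(x_ft)
        m = self.modes
        # Multiply the retained low-frequency modes by learned complex weights.
        out_ft[:, :, :m, :m] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.weight
        )
        # Back to the spatial domain, e.g., a predicted temperature field.
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])

layer = SpectralConv2d(channels=4, modes=12)
power_map = torch.randn(1, 4, 64, 64)
print(layer(power_map).shape)  # torch.Size([1, 4, 64, 64])
```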

Ansys uses a 3D FNO model to infer temperatures on a chip surface for unseen power profiles, a given die height and heat-transfer coefficient boundary condition.

Following a successful proof of concept, the Ansys team will explore integration of such AI surrogate models for its next-generation RedHawk-SC platform using NVIDIA Modulus.

As more surrogate models are developed, the team will also look to enhance model generality and accuracy through in-situ fine-tuning. This will enable RedHawk-SC users to benefit from faster simulation workflows, access to a broader design space and the ability to refine models with their own data to foster innovation and safety in product development.

To see the joint demonstration of 3D-IC multiphysics visualization using NVIDIA Omniverse APIs, visit Ansys at the Design Automation Conference, running June 23-27, in San Francisco at booth 1308 or watch the presentation at the Exhibitor Forum.

Decoding How NVIDIA AI Workbench Powers App Development https://blogs.nvidia.com/blog/ai-decoded-workbench-hybrid-rag/ Wed, 19 Jun 2024 13:00:25 +0000 https://blogs.nvidia.com/?p=72261

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible and showcases new hardware, software, tools and accelerations for NVIDIA RTX PC and workstation users.

The demand for tools to simplify and optimize generative AI development is skyrocketing. Applications based on retrieval-augmented generation (RAG) — a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from specified external sources — and customized models are enabling developers to tune AI models to their specific needs.

While such work may have required a complex setup in the past, new tools are making it easier than ever.

NVIDIA AI Workbench simplifies AI developer workflows by helping users build their own RAG projects, customize models and more. It’s part of the RTX AI Toolkit — a suite of tools and software development kits for customizing, optimizing and deploying AI capabilities — launched at COMPUTEX earlier this month. AI Workbench removes the complexity of technical tasks that can derail experts and halt beginners.

What Is NVIDIA AI Workbench?

Available for free, NVIDIA AI Workbench enables users to develop, experiment with, test and prototype AI applications across GPU systems of their choice — from laptops and workstations to data centers and the cloud. It offers a new approach for creating, using and sharing GPU-enabled development environments across people and systems.

A simple installation gets users up and running with AI Workbench on a local or remote machine in just minutes. Users can then start a new project or replicate one from the examples on GitHub. Everything works through GitHub or GitLab, so users can easily collaborate and distribute work. Learn more about getting started with AI Workbench.

How AI Workbench Helps Address AI Project Challenges

Developing AI workloads can require manual, often complex processes, right from the start.

Setting up GPUs, updating drivers and managing versioning incompatibilities can be cumbersome. Reproducing projects across different systems can require replicating manual processes over and over. Inconsistencies when replicating projects, like issues with data fragmentation and version control, can hinder collaboration. Varied setup processes, moving credentials and secrets, and changes in the environment, data, models and file locations can all limit the portability of projects.

AI Workbench makes it easier for data scientists and developers to manage their work and collaborate across heterogeneous platforms. It integrates and automates various aspects of the development process, offering:

  • Ease of setup: AI Workbench streamlines the process of setting up a developer environment that’s GPU-accelerated, even for users with limited technical knowledge.
  • Seamless collaboration: AI Workbench integrates with version-control and project-management tools like GitHub and GitLab, reducing friction when collaborating.
  • Consistency when scaling from local to cloud: AI Workbench ensures consistency across multiple environments, supporting scaling up or down from local workstations or PCs to data centers or the cloud.

RAG for Documents, Easier Than Ever

NVIDIA offers sample development Workbench Projects to help users get started with AI Workbench. The hybrid RAG Workbench Project is one example: It runs a custom, text-based RAG web application with a user’s documents on their local workstation, PC or remote system.

Every Workbench Project runs in a “container” — software that includes all the necessary components to run the AI application. The hybrid RAG sample pairs a Gradio chat interface frontend on the host machine with a containerized RAG server — the backend that services a user’s request and routes queries to and from the vector database and the selected large language model.

This Workbench Project supports a wide variety of LLMs available on NVIDIA’s GitHub page. Plus, the hybrid nature of the project lets users select where to run inference.

Workbench Projects let users version the development environment and code.

Developers can run the embedding model on the host machine and run inference locally on a Hugging Face Text Generation Inference server, on target cloud resources using NVIDIA inference endpoints like the NVIDIA API catalog, or with self-hosting microservices such as NVIDIA NIM or third-party services.
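
A stripped-down sketch of that frontend and backend split might look like the following: a Gradio chat interface on the host that forwards each message to whichever OpenAI-compatible endpoint the user selects. The endpoint URLs and model name are placeholders, not the project's actual configuration.

```python
# Hedged sketch: a minimal chat frontend that routes to a selectable backend.
# Endpoint URLs and the model name are placeholders.
import gradio as gr
from openai import OpenAI

ENDPOINTS = {
    "Local (self-hosted)": "http://localhost:8000/v1",            # assumed
    "Cloud (hosted API)": "https://integrate.api.nvidia.com/v1",  # assumed
}

def chat(message, history, endpoint_name):
    client = OpenAI(base_url=ENDPOINTS[endpoint_name], api_key="YOUR_KEY_OR_LOCAL")
    reply = client.chat.completions.create(
        model="meta/llama3-8b-instruct",  # illustrative
        messages=[{"role": "user", "content": message}],
    )
    return reply.choices[0].message.content

demo = gr.ChatInterface(
    fn=chat,
    additional_inputs=[
        gr.Dropdown(choices=list(ENDPOINTS), value="Local (self-hosted)",
                    label="Inference endpoint")
    ],
)
demo.launch()
```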

The hybrid RAG Workbench Project also includes:

  • Performance metrics: Users can evaluate how RAG- and non-RAG-based user queries perform across each inference mode. Tracked metrics include Retrieval Time, Time to First Token (TTFT) and Token Velocity.
  • Retrieval transparency: A panel shows the exact snippets of text — retrieved from the most contextually relevant content in the vector database — that are being fed into the LLM and improving the response’s relevance to a user’s query.
  • Response customization: Responses can be tweaked with a variety of parameters, such as maximum tokens to generate, temperature and frequency penalty.

To get started with this project, simply install AI Workbench on a local system. The hybrid RAG Workbench Project can be brought from GitHub into the user’s account and duplicated to the local system.

More resources are available in the AI Decoded user guide. In addition, community members provide helpful video tutorials, like the one from Joe Freeman below.

Customize, Optimize, Deploy

Developers often seek to customize AI models for specific use cases. Fine-tuning, a technique that changes the model by training it with additional data, can be useful for style transfer or changing model behavior. AI Workbench helps with fine-tuning, as well.

The Llama-factory AI Workbench Project enables QLoRA, a fine-tuning method that minimizes memory requirements, for a variety of models, as well as model quantization, via a simple graphical user interface. Developers can use public datasets or their own to meet the needs of their applications.

Once fine-tuning is complete, the model can be quantized for improved performance and a smaller memory footprint, then deployed to native Windows applications for local inference or to NVIDIA NIM for cloud inference. Find a complete tutorial for this project on the NVIDIA RTX AI Toolkit repository.
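
For readers curious about the underlying recipe, here is a generic QLoRA-style sketch using the Hugging Face stack: load the base model in 4-bit, attach low-rank adapters and train only those adapters. It illustrates the technique, not the Llama-factory project itself, and the model name is a placeholder.

```python
# Hedged sketch of QLoRA-style fine-tuning with the Hugging Face stack
# (generic illustration; not the Llama-factory Workbench Project itself).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base = "meta-llama/Meta-Llama-3-8B"  # placeholder model id

bnb = BitsAndBytesConfig(            # 4-bit base weights keep memory requirements low
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb, device_map="auto"
)

lora = LoraConfig(                   # small trainable adapters on attention projections
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # only the adapter weights are trainable

# From here, a standard supervised fine-tuning loop on the user's dataset trains
# the adapters; the result can then be quantized and deployed for inference.
```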

Truly Hybrid — Run AI Workloads Anywhere

The Hybrid-RAG Workbench Project described above is hybrid in more than one way. In addition to offering a choice of inference mode, the project can be run locally on NVIDIA RTX workstations and GeForce RTX PCs, or scaled up to remote cloud servers and data centers.

The ability to run projects on systems of the user’s choice — without the overhead of setting up the infrastructure — extends to all Workbench Projects. Find more examples and instructions for fine-tuning and customization in the AI Workbench quick-start guide.

For a more technical perspective, read the blog Optimize AI Model Performance and Maintain Data Privacy with Hybrid RAG.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

NVIDIA Advances Physical AI at CVPR With Largest Indoor Synthetic Dataset https://blogs.nvidia.com/blog/ai-city-challenge-omniverse-cvpr/ Mon, 17 Jun 2024 13:00:09 +0000 https://blogs.nvidia.com/?p=72204

NVIDIA contributed the largest ever indoor synthetic dataset to the Computer Vision and Pattern Recognition (CVPR) conference’s annual AI City Challenge — helping researchers and developers advance the development of solutions for smart cities and industrial automation.

The challenge, which drew over 700 teams from nearly 50 countries, tasks participants with developing AI models to enhance operational efficiency in physical settings, such as retail and warehouse environments, and in intelligent traffic systems.

Teams tested their models on the datasets that were generated using NVIDIA Omniverse, a platform of application programming interfaces (APIs), software development kits (SDKs) and services that enable developers to build Universal Scene Description (OpenUSD)-based applications and workflows.

Creating and Simulating Digital Twins for Large Spaces

In large indoor spaces like factories and warehouses, daily activities involve a steady stream of people, small vehicles and future autonomous robots. Developers need solutions that can observe and measure activities, optimize operational efficiency, and prioritize human safety in complex, large-scale settings.

Researchers are addressing that need with computer vision models that can perceive and understand the physical world. These models can be used in applications like multi-camera tracking, in which a model tracks multiple entities within a given environment.

To ensure their accuracy, the models must be trained on large, ground-truth datasets for a variety of real-world scenarios. But collecting that data can be a challenging, time-consuming and costly process.

AI researchers are turning to physically based simulations — such as digital twins of the physical world — to enhance AI simulation and training. These virtual environments can help generate synthetic data used to train AI models. Simulation also provides a way to run a multitude of “what-if” scenarios in a safe environment while addressing privacy and AI bias issues.

Synthetic data is important for AI training because it can be generated at scale and expanded as needed. Teams can create a diverse set of training data by changing many parameters, including lighting, object locations, textures and colors.

Building Synthetic Datasets for the AI City Challenge

This year’s AI City Challenge consists of five computer vision challenge tracks that span traffic management to worker safety.

NVIDIA contributed datasets for the first track, Multi-Camera Person Tracking, which saw the highest participation, with over 400 teams. The challenge used a benchmark and the largest synthetic dataset of its kind — comprising 212 hours of 1080p videos at 30 frames per second spanning 90 scenes across six virtual environments, including a warehouse, retail store and hospital.

Created in Omniverse, these scenes simulated nearly 1,000 cameras and featured around 2,500 digital human characters. The platform also provided a way for the researchers to generate data of the right size and fidelity to achieve the desired outcomes.

The benchmarks were created using Omniverse Replicator in NVIDIA Isaac Sim, a reference application that enables developers to design, simulate and train AI for robots, smart spaces or autonomous machines in physically based virtual environments built on NVIDIA Omniverse.

Omniverse Replicator, an SDK for building synthetic data generation pipelines, automated many manual tasks involved in generating quality synthetic data, including domain randomization, camera placement and calibration, character movement, and semantic labeling of data and ground-truth for benchmarking.
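
A hedged sketch of such a synthetic data generation script is shown below, following the omni.replicator.core conventions; exact call names can vary by release, and the scene contents are illustrative stand-ins.

```python
# Hedged sketch of a Replicator-style synthetic data script. It runs inside
# Omniverse; call names follow the omni.replicator.core docs but may vary by version.
import omni.replicator.core as rep

with rep.new_layer():
    camera = rep.create.camera(position=(0, 500, 1000), look_at=(0, 0, 0))
    render_product = rep.create.render_product(camera, resolution=(1280, 720))

    # Simple stand-ins for scene objects; a real pipeline would reference USD assets.
    props = rep.create.cube(count=10, semantics=[("class", "obstacle")])

    def randomize_scene():
        # Domain randomization: scatter and rotate the props on every frame.
        with props:
            rep.modify.pose(
                position=rep.distribution.uniform((-500, 0, -500), (500, 0, 500)),
                rotation=rep.distribution.uniform((0, 0, 0), (0, 360, 0)),
            )
        return props.node

    rep.randomizer.register(randomize_scene)

    with rep.trigger.on_frame(num_frames=100):   # generate 100 randomized frames
        rep.randomizer.randomize_scene()

    # Write RGB frames plus ground-truth labels for training and benchmarking.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_synthetic_out", rgb=True, bounding_box_2d_tight=True)
    writer.attach([render_product])
```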

Ten institutions and organizations are collaborating with NVIDIA for the AI City Challenge:

  • Australian National University, Australia
  • Emirates Center for Mobility Research, UAE
  • Indian Institute of Technology Kanpur, India
  • Iowa State University, U.S.
  • Johns Hopkins University, U.S.
  • National Yung-Ming Chiao-Tung University, Taiwan
  • Santa Clara University, U.S.
  • The United Arab Emirates University, UAE
  • University at Albany – SUNY, U.S.
  • Woven by Toyota, Japan

Driving the Future of Generative Physical AI 

Researchers and companies around the world are developing infrastructure automation and robots powered by physical AI — which are models that can understand instructions and autonomously perform complex tasks in the real world.

Generative physical AI uses reinforcement learning in simulated environments, where it perceives the world using accurately simulated sensors, performs actions grounded by laws of physics, and receives feedback to reason about the next set of actions.
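
That perceive-act-feedback loop reduces to a few lines in any standard simulation API. The sketch below uses Gymnasium's CartPole environment as a stand-in for a physics-based simulator, just to make the loop concrete.

```python
# Minimal sketch of the perceive-act-feedback loop in a simulated environment.
# Gymnasium's CartPole is a stand-in for a physics-based simulator.
import gymnasium as gym

env = gym.make("CartPole-v1")
observation, info = env.reset(seed=0)   # perceive the initial state

for step in range(200):
    action = env.action_space.sample()  # a trained policy would choose this
    observation, reward, terminated, truncated, info = env.step(action)
    # The reward signal is the feedback used to reason about the next set of actions.
    if terminated or truncated:
        observation, info = env.reset()

env.close()
```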

Developers can tap into developer SDKs and APIs, such as the NVIDIA Metropolis developer stack — which includes a multi-camera tracking reference workflow — to add enhanced perception capabilities for factories, warehouses and retail operations. And with the latest release of NVIDIA Isaac Sim, developers can supercharge robotics workflows by simulating and training AI-based robots in physically based virtual spaces before real-world deployment.

Researchers and developers are also combining high-fidelity, physics-based simulation with advanced AI to bridge the gap between simulated training and real-world application. This helps ensure that synthetic training environments closely mimic real-world conditions for more seamless robot deployment.

NVIDIA is taking the accuracy and scale of simulations further with the recently announced NVIDIA Omniverse Cloud Sensor RTX, a set of microservices that enable physically accurate sensor simulation to accelerate the development of fully autonomous machines.

This technology will allow autonomous systems, whether a factory, vehicle or robot, to gather essential data to effectively perceive, navigate and interact with the real world. Using these microservices, developers can run large-scale tests on sensor perception within realistic, virtual environments, significantly reducing the time and cost associated with real-world testing.

Omniverse Cloud Sensor RTX microservices will be available later this year. Sign up for early access.

Showcasing Advanced AI With Research

Participants submitted research papers for the AI City Challenge and a few achieved top rankings, including:

All accepted papers will be presented at the AI City Challenge 2024 Workshop, taking place on June 17.

At CVPR 2024, NVIDIA Research will present over 50 papers, introducing generative physical AI breakthroughs with potential applications in areas like autonomous vehicle development and robotics.

Papers that used NVIDIA Omniverse to generate synthetic data or digital twins of environments for model simulation, testing and validation include:

Read more about NVIDIA Research at CVPR, and learn more about the AI City Challenge.

Get started with NVIDIA Omniverse by downloading the standard license free, access OpenUSD resources and learn how Omniverse Enterprise can connect teams. Follow Omniverse on Instagram, Medium, LinkedIn and X. For more, join the Omniverse community on the forums, Discord server, Twitch and YouTube channels.

Creativity Accelerated: New RTX-Powered AI Hardware and Software Announced at COMPUTEX https://blogs.nvidia.com/blog/rtx-ai-pc-studio-computex/ Wed, 05 Jun 2024 13:00:59 +0000 https://blogs.nvidia.com/?p=72029

NVIDIA launched NVIDIA Studio at COMPUTEX in 2019. Five years and more than 500 NVIDIA RTX-accelerated apps and games later, it’s bringing AI to even more creators with an array of new RTX technology integrations announced this week at COMPUTEX 2024.

Newly announced NVIDIA GeForce RTX AI laptops — including the ASUS ProArt PX13 and P16 and MSI Stealth 16 AI+ laptops — will feature dedicated RTX Tensor Cores to accelerate AI performance and power-efficient systems-on-a-chip with Windows 11 AI PC features. They join over 200 laptops already accelerated with RTX AI technology.

NVIDIA RTX Video, a collection of technologies including RTX Video Super Resolution and RTX Video HDR that enhance video content streamed in browsers like Google Chrome, Microsoft Edge and Mozilla Firefox, is coming to the free VLC Media Player. And in June, for the first time, creators will be able to enjoy these AI-enhanced video effects in popular creative apps like DaVinci Resolve and Wondershare Filmora.

DaVinci Resolve and Cyberlink PowerDirector are adding NVIDIA’s new H.265 Ultra-High-Quality (UHQ) mode, which uses NVIDIA NVENC to increase high-efficiency video coding (HEVC) encoding efficiency by 10%.

NVIDIA RTX Remix, a modding platform for remastering classic games with RTX, will soon be made open source, allowing more modders to streamline how assets are replaced and scenes are relit. RTX Remix will also be made accessible via a new REST application programming interface (API) to connect the platform to other modding tools like Blender and Hammer.

Creative apps are continuing to adopt AI-powered NVIDIA DLSS for higher-quality ray-traced visuals in the viewport, with 3D modeling platform Womp being the latest to integrate DLSS 3.5 with Ray Reconstruction.

NVIDIA unveiled Project G-Assist, an RTX-powered AI-assistant technology demo that provides context-aware help for PC games and apps.

The new NVIDIA app beta update adds 120 frames per second AV1 video capture and one-click performance-tuning.

And the latest Game Ready Driver and NVIDIA Studio Driver are available for installation today.

Video Gets the AI Treatment

RTX Video is a collection of real-time, AI-based video enhancements — powered by RTX GPUs equipped with AI Tensor Cores — to dramatically improve video quality.

It includes RTX Video Super Resolution — an upscaling technology that removes compression artifacts and generates additional pixels to improve video sharpness and clarity up to 4K — and RTX Video HDR, which transforms standard dynamic range videos into stunning high-dynamic range on HDR10 displays.

NVIDIA has released the RTX Video software development kit, which allows app developers to add RTX Video effects to creator workflows.

Blackmagic Design’s DaVinci Resolve, a powerful video editing app with color correction, visual effects, graphics and audio post-production capabilities, will be one of the first to integrate RTX Video. The integration is being demoed on the COMPUTEX show floor.

Wondershare Filmora, a video editing app with AI tools and pro-level social media video editing features, will support RTX Video HDR, coming soon.

Wondershare Filmora will soon support RTX Video HDR.

VLC Media Player, an open-source, cross-platform media player, has added RTX Video HDR in its latest beta release, following recently added support in Mozilla Firefox.

NVIDIA hardware encoders deliver a generational boost in encoding efficiency to HEVC. Performance tested on dual Xeon Gold-6140@2.3GHz running NVIDIA L4 Tensor Core GPUs with driver 520.65.

NVIDIA also released a new UHQ mode in NVENC, a dedicated hardware encoder on RTX GPUs, for the HEVC video compression standard (also known as H.265). The new mode increases compression by 10% without diminishing quality, making NVENC HEVC 34% more efficient than the typically used x264 Medium compression standard.

DaVinci Resolve and Cyberlink PowerDirector video editing software will be adding support for the new UHQ mode in their next updates. Stay tuned for official launch dates.

RTX Remix Open Sources Creator Toolkit

NVIDIA RTX Remix allows modders to easily capture game assets, automatically enhance materials with generative AI tools and create stunning RTX remasters with full ray tracing.

RTX Remix open beta recently added DLSS 3.5 support featuring Ray Reconstruction, an AI model that creates higher-quality images for intensive ray-traced games and apps.

Later this month, NVIDIA will make the RTX Remix Toolkit open source, allowing more modders to streamline how assets are replaced and scenes are relit. The company is also increasing the supported file formats for RTX Remix’s asset ingestor and bolstering RTX Remix’s AI Texture Tools with new models.

The RTX Remix toolkit is now completely open source.

NVIDIA is also making the capabilities of RTX Remix accessible via a new powerful REST API, allowing modders to livelink RTX Remix to other DCC tools such as Blender and modding tools such as Hammer. NVIDIA is also providing an SDK for the RTX Remix runtime to allow modders to deploy RTX Remix’s renderer into other applications and games beyond DirectX 8 and 9 classics.
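
The post doesn't spell out the API's endpoints, so the snippet below only illustrates the general shape of driving a local REST API from a modding or DCC tool's scripting environment; the port, route and payload are entirely hypothetical.

```python
# Purely illustrative: what livelinking a DCC tool to a local REST API might look like.
# The port, route and payload below are hypothetical, not RTX Remix's actual API.
import requests

REMIX_API = "http://localhost:8011"  # hypothetical local address

payload = {
    "asset_path": "./meshes/crate_remastered.usd",  # hypothetical replacement asset
    "target_hash": "0xDEADBEEF",                    # hypothetical captured-asset id
}
resp = requests.post(f"{REMIX_API}/assets/replace", json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())
```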

Catch Some Rays

NVIDIA DLSS 3.5 with Ray Reconstruction enhances ray-traced image quality on NVIDIA RTX and GeForce RTX GPUs by replacing hand-tuned denoisers with an NVIDIA supercomputer-trained AI network that generates higher-quality pixels in between sampled rays.

Previewing content in the viewport, even with high-end hardware, can sometimes offer less-than-ideal image quality, as traditional denoisers require hand-tuning for every scene. With DLSS 3.5, the AI neural network recognizes a wide variety of scenes, producing high-quality preview images and drastically reducing time spent rendering.

The free browser-based 3D modeling platform Womp has added DLSS 3.5 to enhance interactive, photorealistic modeling in the viewport.

DLSS 3.5 with Ray Reconstruction unlocks sharper visuals in the viewport.

Chaos Vantage and D5 Render, two popular professional-grade 3D apps that feature real-time preview modes with ray tracing, have also seen drastic performance increases with DLSS 3.5 — up to a 60% boost from Ray Reconstruction and 4x from all DLSS technologies.

Tools That Accelerate AI Apps

Most of the open-source AI models currently available are pretrained for general purposes and run in data centers.

To create more effective app-specific AI tools that run on local PCs, NVIDIA has introduced the RTX AI Toolkit — an end-to-end workflow for the customization, optimization and deployment of AI models on RTX AI PCs.

Partners such as Adobe, Topaz and Blackmagic Design are integrating RTX AI Toolkit within their popular creative apps to accelerate AI performance on RTX PCs.

Developers can learn more on the NVIDIA Technical Blog.

Decoding How NVIDIA RTX AI PCs and Workstations Tap the Cloud to Supercharge Generative AI https://blogs.nvidia.com/blog/ai-decoded-rtx-pc-hybrid/ Wed, 29 May 2024 13:00:53 +0000 https://blogs.nvidia.com/?p=71827

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and RTX workstation users.

Generative AI is enabling new capabilities for Windows applications and games. It’s powering unscripted, dynamic NPCs, it’s enabling creators to generate novel works of art, and it’s helping gamers boost frame rates by up to 4x. But this is just the beginning.

As the capabilities and use cases for generative AI continue to grow, so does the demand for compute to support it.

Hybrid AI combines the onboard AI acceleration of NVIDIA RTX with scalable, cloud-based GPUs to effectively and efficiently meet the demands of AI workloads.

Hybrid AI, a Love Story

With growing AI adoption, app developers are looking for deployment options: AI running locally on RTX GPUs delivers high performance and low latency, and is always available — even when not connected to the internet. On the other hand, AI running in the cloud can run larger models and scale across many GPUs, serving multiple clients simultaneously. In many cases, a single application will use both.

Hybrid AI is a kind of matchmaker that harmonizes local PC and workstation compute with cloud scalability. It provides the flexibility to optimize AI workloads based on specific use cases, cost and performance. It helps developers ensure that AI tasks run where it makes the most sense for their specific applications.
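As a rough illustration of that matchmaking, the sketch below shows one way an application might route an inference request between the local RTX GPU and a cloud endpoint. This is an assumed heuristic for illustration, not an NVIDIA API or recommendation.

```python
# Illustrative routing logic for a hybrid local/cloud AI setup (assumption,
# not an NVIDIA API): pick a backend based on model size, latency needs and
# connectivity.
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    model_size_gb: float      # approximate footprint of the model weights
    needs_low_latency: bool   # e.g. interactive viewport or NPC dialogue
    online: bool              # is a cloud endpoint reachable right now?

LOCAL_VRAM_GB = 16.0  # assumed VRAM budget of the RTX GPU in this PC

def choose_backend(req: InferenceRequest) -> str:
    """Prefer the local GPU for small, latency-sensitive or offline work;
    fall back to the cloud for models that exceed local memory."""
    if not req.online:
        return "local"                    # offline: the local GPU is the only option
    if req.model_size_gb > LOCAL_VRAM_GB:
        return "cloud"                    # too big to fit locally
    if req.needs_low_latency:
        return "local"                    # avoid network round trips
    return "cloud"                        # batch-style work can scale out

print(choose_backend(InferenceRequest(7.0, True, True)))    # -> local
print(choose_backend(InferenceRequest(70.0, False, True)))  # -> cloud
```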

Whether the AI is running locally or in the cloud, it gets accelerated by NVIDIA GPUs and NVIDIA’s AI stack, including TensorRT and TensorRT-LLM. That means less time staring at pinwheels of death and more opportunity to deliver cutting-edge, AI-powered features to users.

A range of NVIDIA tools and technologies support hybrid AI workflows for creators, gamers, and developers.

Dream in the Cloud, Bring to Life on RTX

Generative AI has demonstrated its ability to help artists ideate, prototype and brainstorm new creations. One such solution, the cloud-based Generative AI by iStock — powered by NVIDIA Edify — is a generative photography service that was built for and with artists, training only on licensed content and with compensation for artist contributors.

Generative AI by iStock goes beyond image generation, providing artists with extensive tools to explore styles and variations, modify parts of an image or expand the canvas. With all these tools, artists can iterate on ideas numerous times and still bring them to life quickly.

Once the creative concept is ready, artists can bring it back to their local systems. RTX-powered PCs and workstations offer artists AI acceleration in more than 125 of the top creative apps to realize the full vision — whether it’s creating an amazing piece of artwork in Photoshop with local AI tools, animating the image with a parallax effect in DaVinci Resolve, or building a 3D scene from the reference image in Blender with ray-tracing acceleration and AI denoising in OptiX.

Hybrid ACE Brings NPCs to Life

Hybrid AI is also enabling a new realm of interactive PC gaming with NVIDIA ACE, allowing game developers and digital creators to integrate state-of-the-art generative AI models into digital avatars on RTX AI PCs.

Powered by AI neural networks, NVIDIA ACE lets developers and designers create non-playable characters (NPCs) that can understand and respond to human player text and speech. It chains AI models, including speech-to-text models that handle natural language spoken aloud, to generate NPCs’ responses in real time.
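Conceptually, one conversational turn of such a pipeline looks like the sketch below: speech in, a language model in the middle, a spoken reply out. The three helper functions are hypothetical placeholders, not NVIDIA ACE APIs; a real integration would call the corresponding ACE microservices or local models.

```python
# Conceptual NPC dialogue turn (placeholders only, not ACE API calls).
def transcribe(audio_chunk: bytes) -> str:
    """Speech-to-text stage; placeholder returns canned text."""
    return "Where can I find the blacksmith?"

def generate_reply(npc_persona: str, player_text: str) -> str:
    """LLM stage producing an in-character response; placeholder implementation."""
    return f"({npc_persona}) Head past the market square and turn left."

def speak(text: str) -> bytes:
    """Text-to-speech stage; placeholder just encodes the reply as bytes."""
    return text.encode("utf-8")

def npc_turn(npc_persona: str, audio_chunk: bytes) -> bytes:
    """One conversational turn: transcribe, reason, then synthesize speech."""
    player_text = transcribe(audio_chunk)
    reply = generate_reply(npc_persona, player_text)
    return speak(reply)

print(npc_turn("gruff innkeeper", b"<microphone audio>"))
```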

A Hybrid Developer Tool That Runs Anywhere

Hybrid AI also helps developers build and tune new AI models. NVIDIA AI Workbench helps developers quickly create, test and customize pretrained generative AI models and LLMs on RTX GPUs. It offers streamlined access to popular repositories like Hugging Face, GitHub and NVIDIA NGC, along with a simplified user interface that enables data scientists and developers to easily reproduce, collaborate on and migrate projects.

Projects can be easily scaled up when additional performance is needed — whether to the data center, a public cloud or NVIDIA DGX Cloud — and then brought back to local RTX systems on a PC or workstation for inference and light customization. Data scientists and developers can leverage pre-built Workbench projects to chat with documents using retrieval-augmented generation (RAG), customize LLMs using fine-tuning, accelerate data science workloads with seamless CPU-to-GPU transitions and more.

The Hybrid RAG Workbench project provides a customizable RAG application that developers can run and adapt themselves. They can embed their documents locally and run inference on a local RTX system, on a cloud endpoint hosted on NVIDIA’s API catalog or with NVIDIA NIM microservices. The project can be adapted to use various models, endpoints and containers, and lets developers quantize models to run on their GPU of choice.
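For readers new to the pattern, the sketch below shows the retrieval-augmented generation loop in miniature: retrieve the most relevant documents, then feed them to a generator as context. The embed() and generate() functions are toy placeholders; in the Workbench project they would map to a local RTX model, a cloud endpoint or a NIM microservice.

```python
# Minimal RAG sketch (illustrative only; not the Hybrid RAG project's code).
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would use an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(prompt: str, backend: str = "local") -> str:
    """Placeholder for the LLM call; 'backend' is where local vs. cloud is chosen."""
    return f"[{backend} model would answer based on]\n{prompt}"

docs = ["DLSS 3.5 adds Ray Reconstruction.", "AI Workbench runs on RTX GPUs."]
question = "What does DLSS 3.5 add?"
context = "\n".join(retrieve(question, docs))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```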

NVIDIA GPUs power remarkable AI solutions locally on NVIDIA GeForce RTX PCs and RTX workstations and in the cloud. Creators, gamers and developers can get the best of both worlds with growing hybrid AI workflows.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.
