New Chronos AI model for frame rate conversion – Easily adjust motion in your clips by slowing them down by up to 2000%. Convert the frame rate of your source footage to conform to your project requirements (e.g. 23.976 fps → 30 fps), or add even smoother motion by converting to higher frame rates (e.g. 60 fps or 120 fps).
New Proteus AI model for fine-tuned enhancement control – Get even more control over the output quality of your clips with six customizable sliders. New controls help reduce halos from over-sharpening, recover detail lost to compression, and minimize the aliasing that can make footage look less sharp.
New presets manager – Save and load specific settings for the AI models you use most. Download, share, and import presets with other Video Enhance AI users.
AI model picker – Select the options that best represent your source clip and we’ll provide you with the AI models that can help enhance it.
Comparison View – Compare the results of three AI models alongside your original clip in one convenient view. Then select the best one to apply to your footage without changing views.
Performance and usability improvements – We’ve made many improvements to our AI engine for better speed and stability across a range of hardware, as well as included several helpful usability improvements.
New Chronos AI model for frame rate conversion
Realistic slow motion made easy
With most other video editing applications, slowing down footage from a 24 fps clip would result in distracting stutter from duplicated frames. However, Chronos leverages the power of AI to create beautiful slow motion results regardless of the source frame rate.
Watch how to use Chronos to slow down your video, convert frame rate, and learn best practices to ensure you get great results:
Take the guesswork out of frame rate conversion
Working with clips that have varying frame rates is a reality in video editing, and the process of conforming them to your project requirements used to be a daunting task. Now, Chronos will take care of the heavy lifting involved with realistically converting the frame rate of your source clips to conform to your project requirements. Adjust a clip’s frame rate to 60, 100, or 120 fps to get even smoother motion between frames.
Let’s take a look at how the new Chronos AI model handles frame rate conversion compared to another leading video editing application, Apple Final Cut Pro. The source video was filmed at 24 fps. On the left half of the video below, we let Final Cut Pro convert the original 24 fps clip to 60 fps. Notice the slight jitter, especially when the speed of the UAV increases. For the right half of the video, we used Chronos to perform the same 60 fps conversion. Notice how much smoother the motion is when using Chronos as opposed to Final Cut Pro.
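To see why a 24 fps → 60 fps conversion is hard, consider which source frames each output frame must sample. Most output frames fall between two source frames, so new in-between frames have to be synthesized. The sketch below only illustrates that timing problem; it is not Topaz's actual algorithm, and the frame-mapping function is a hypothetical illustration.

```python
# Sketch: mapping output frames to fractional source positions for a
# 24 fps -> 60 fps conversion. Illustrates the timing problem an
# interpolating model like Chronos solves; not Topaz's code.

SRC_FPS = 24.0
DST_FPS = 60.0

def source_position(dst_frame: int) -> float:
    """Fractional source-frame index that output frame `dst_frame` samples."""
    return dst_frame * SRC_FPS / DST_FPS

for n in range(6):
    pos = source_position(n)
    lo, frac = int(pos), pos - int(pos)
    if frac == 0:
        print(f"output frame {n}: copy source frame {lo}")
    else:
        print(f"output frame {n}: synthesize between {lo} and {lo + 1} (t={frac:.2f})")
```

Only output frames 0 and 5 line up exactly with a source frame; the other four must be invented, which is where naive duplication produces jitter and AI interpolation produces smooth motion.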
New Proteus AI model for fine-tuned enhancement control
One of the most requested features we’ve received is to provide greater control over how to improve your video clips. The new Proteus AI model was built specifically to meet this request. Control six discrete sliders to fine-tune de-blocking, detail recovery, sharpening, noise reduction, de-haloing, and anti-aliasing. Select a representative frame in your clip and click the “Auto” button, letting our AI engine analyze the selected frame and provide recommended settings based on what’s displayed.
New presets manager
Quickly load your favorite AI model settings
If you’ve found yourself constantly having to select the same AI model and settings over and over, you’re going to love the new Preset Manager. Configure the settings of your favorite AI model, save that snapshot as a preset, and quickly load it on subsequent projects. Share your favorite presets with other Video Enhance AI users and import presets shared with you.
New AI model picker and comparison view
Choose the right AI models with confidence
It can sometimes be challenging to decide which AI model would work best with your video clip. To make that task easier, we’ve built two helpful features: the AI model selector and the comparison view. With the AI model selector, choose the type of clip you want to enhance and which issue most affects it. We’ll suggest three AI models that are best suited for the task based on your selections. On top of that, a new comparison view conveniently displays those three recommended AI models simultaneously, giving you even more confidence when choosing the best one for your clip.
Performance and usability improvements
Thanks to improvements to our AI Engine pipeline, you’ll experience a 50% performance boost with NVIDIA GeForce GTX GPUs and up to 3x faster processing on Apple M1 computers. We’ve also included a number of usability improvements directly based on user feedback, including displaying the process completion time in “hours:minutes:seconds” instead of just seconds and adding the ability to switch between showing frames and timecodes in the UI.
We’re constantly striving to make Video Enhance AI smarter and more effective when improving the quality of your video footage, whether it’s bringing out details from your old family videos, upscaling a Zoom video recording, or adjusting the resolution, speed and frame rate of cinematic footage. Be sure to tag us on social media and use #TopazVEAI so that we can see your amazing work! If you don’t yet own Video Enhance AI, purchase a copy or download a free trial.
New Severe Noise AI model – tackle photos suffering from excessive noise due to very high ISO settings and low light conditions
Improved Comparison View – Select which AI models are displayed to quickly choose the one that best suits your image
Apple M1 plugin support in Adobe Photoshop – Access the DeNoise AI plugin from within Photoshop running natively on Apple M1 silicon without needing to use Rosetta emulation
Snappier performance – Up to 3x CPU and 12x GPU improvements on Windows and 4x improvements with Macs using discrete GPUs when compared to DeNoise AI 2.4.2
You know the feeling you get when you set your camera to ISO 8000 (or higher) and in the back of your mind, you’re sure that the shot you’re about to take is going to be a “throw-away” because of the crazy amount of noise it’ll undoubtedly have? But, you take the photo anyway and can already anticipate the disappointment when you try to get rid of all that noise. Still, you’re really digging the photo despite the excessive noise and, at this point, you’ll gladly try anything to salvage it.
If you can relate, then we think you’ll be very happy about the latest update to DeNoise AI v3.1. We want to help you turn photos that you never thought would see the light of day into something you’ll be excited to share with your family and friends. Let’s take a closer look at what we cooked up.
Making sense of the noise
The last thing we want is for you to throw away a photo you really love because it is riddled with excessive noise. It has happened too many times already and we wanted to do something about it. So, we built the Severe Noise AI model for those times when you’re looking for that one Hail Mary to save the day (or just your super-noisy photo).
The first challenge is learning which photos work best with Severe Noise. Unlike Low Light, our other high ISO AI model, Severe Noise works best on images that suffer from an extremely large amount of noise. Typically, photos taken at ISO 6400 and higher will have an excessive amount of noise that can negatively affect the end result.
Next, we had to figure out how to remove both the luminance and color noise that typically affect super high ISO photos without obliterating the natural edge detail of your subjects. So, we fed our AI model thousands of super-noisy photos, letting it learn to separate the signal from the noise.
Our hope is that with this new AI model, you’ll be able to recover all sorts of photos previously overlooked due to excessive noise.
A new way to compare AI models
Sometimes, it can be hard to see the effects that each of the four AI models in DeNoise AI have on your photos when switching between them. That’s where Comparison View comes in. We built it to quickly show you the results of multiple AI models in one convenient view. When you can see multiple AI models at once, it’s that much easier to decide which one works best with your photo.
Remarkable intelligent noise reduction
We’re constantly striving to make DeNoise AI smarter and more effective in removing noise based on the specific need of your photos. Since 2018, we’ve made more than 100 new or substantially improved AI models for image quality and they’re only going to get better. We’re really excited to see how you use DeNoise AI to make your photos look their best. Be sure to tag us on social media and use #DeNoiseAI so that we can see your amazing work! If you don’t yet own DeNoise AI, purchase a copy or download a free trial.
Here at Topaz Labs, we train modern AI models to “auto-magically” enhance images and video, and deliver them to our customers in desktop applications that run on their own hardware. This provides a lot of value to our customers, but raises new challenges to our engineers in ensuring reliable software delivery.
We recently released new versions of our Sharpen AI and DeNoise AI applications using a next-generation version of our AI Engine. The AI Engine is the part of our software which takes in the AI model, and processes an image through it using a variety of hardware-optimized libraries. This new version of the AI Engine has resulted in some incredible improvements in speed, but unfortunately the launch of the 3.0 versions of Sharpen AI and DeNoise AI was rougher than expected.
We ran into problems where after the initial release, certain combinations of operating systems, CPUs, GPUs, and drivers resulted in crashes or sub-optimal performance. Our customers posted several issues on our forums, and our support ticket volumes sky-rocketed.
Fortunately, our team rallied and isolated a few of the major reproducible issues, and our engineers diagnosed and patched them in short order. However, for many of our customers, the damage was already done: they had upgraded our software, only to get a new version that was worse than what they had before.
The issues in our release products affected our customers across three dimensions:
The severity of the issue for a given user
The number of users affected
The length of time that a given user is affected
Each one of those dimensions has a multiplying effect on the negative impact of a given released bug, and unfortunately these releases suffered from larger than hoped-for effects across all three dimensions.
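The multiplying effect described above can be made concrete with a small illustration: treat overall impact as the product of severity, users affected, and duration. The function and all of the numbers below are made up for demonstration, not measurements from these releases.

```python
# Illustration of the three multiplying dimensions of a released bug:
# overall impact ~ severity x users affected x duration affected.
# The function name and all numbers are hypothetical.

def bug_impact(severity: float, users_affected: int, days_affected: float) -> float:
    """Toy impact score: the product of the three dimensions."""
    return severity * users_affected * days_affected

# A moderate bug hitting many users for a week can outweigh a severe
# bug hitting a handful of users that is hotfixed within a day.
widespread = bug_impact(severity=2.0, users_affected=10_000, days_affected=7)
contained = bug_impact(severity=8.0, users_affected=50, days_affected=1)
print(widespread > contained)  # True
```

Because the dimensions multiply rather than add, shrinking any one of them (for example, cutting time-to-fix with hotfixes) reduces total impact dramatically.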
Now that the dust has settled on those recent releases, the team met and came up with some ways to improve the quality of our product releases while still enabling us to innovate quickly on new enhancements to our products:
We had recently engaged a new Quality Assurance partner, but unfortunately we didn’t ask them to test across a diverse enough set of operating systems, CPUs, and GPUs. We also made large changes to our software, and to the way it is installed for our users, after the testing runs had already completed.
To address these issues, we’ve updated our testing plans to cover five times as many machine configurations, and instituted a stricter feature-freeze period between testing and release. This will ensure that we catch more issues earlier and minimize the chances that new issues are introduced after testing.
We’ve also adjusted our release schedule so that products with similar new functionality will go out at different times. This means that if issues with a new common piece of functionality, like this new AI Engine, are encountered, they only affect one product at a time and can be fixed for that one product.
One of the new features we rolled out with the new product installers is the ability to optimize software updates. This means that when you update a piece of Topaz software, the installer only downloads the parts of the application that have changed since the last version.
We’ve also changed our release processes to enable more “hotfixes”: fixes that the engineers feel are highly impactful for our customers and carry a low risk of introducing new problems. Close followers of past Topaz releases may have seen us do these before; we’re just further enabling the team to get these fixes out to our customers faster.
Continuous Improvement through Transparency
At Topaz Labs, we feel that some of the best lessons are learned when things do not go well. So we endeavor to foster open and honest communication internally towards learning those lessons as a team. We also aim to keep our promise of high-quality software delivery to our customers, and to be honest when we feel we have fallen short.
Hopefully others may find some of the lessons we’ve learned useful for their own projects. We’ve certainly been encouraged that we can take steps towards even better releases in the future.