News and updates from Adobe MAX.

Collaboration & Machine Learning

Some significant tools for collaboration were shown at Adobe’s MAX conference. Chief among these is Frame.io, which is now part of the Adobe family. With COVID and the push to remote working, there have also been major advances in collaboration within Adobe’s core creative tools, such as Photoshop.

Photoshop & Illustrator on the Web

You can now edit and collaborate with Photoshop in the cloud. This is not just file sharing: it is a cloud version of Photoshop or Illustrator running in the browser. It is not the full Photoshop experience, but it is a solid set of essential tools that cover many common operations. When a Photoshop document is shared with you, you can choose to download it and edit it in your local copy of Photoshop, or edit the file in the cloud via the browser.

Photoshop Desktop

Adobe’s Sensei AI machine learning core is now five years old, and Adobe continues to offer new ML tools, especially via Photoshop’s Neural Filters.

New Neural Filters

Neural Filters were released in Photoshop last year. The Neural Filters workspace introduces non-destructive filters that use Sensei AI machine learning to help you explore creative ideas, often as beta or early test versions of Sensei applications. Since then, the library of artistic and restorative filters has grown and improved rapidly, aiming to speed up parts of an artist’s workflow.

Landscape Mixer (Beta)

Landscape Mixer (beta) allows users to create entirely new scenes or concept art by combining any two landscape images. To create the same effect manually would require many hours of additional effort, but the Landscape Mixer Neural Filter is virtually automatic and very fast.

  • Using it you can adjust the season of a scene, creating a winterscape from a sunny summer day, or changing the leaves on trees from summery green to autumn colors.
  • Users can alter the time of day of a landscape by giving a mid-day photo a golden-hour sunset.
  • People or other subjects in the scene can be automatically masked and harmonized with the new scene.

Harmonization (Beta)

This filter matches the color and tone of an element on one layer to another layer using Adobe Sensei machine learning. The Harmonization Neural Filter composites by intelligently adjusting the hue and luminosity of the layers.
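Adobe has not detailed how the Sensei model behind Harmonization works, but a classical statistics-matching pass conveys the basic idea of nudging a foreground layer’s luminosity and color toward the background. The sketch below is a hypothetical, hand-rolled illustration (the alpha-mask input and the 0.5 color-blend factor are assumptions, not Adobe’s method):

```python
import numpy as np
import cv2  # OpenCV, assumed available


def harmonize_layer(fg_bgr, fg_alpha, bg_bgr):
    """Shift a foreground layer's luminosity and color toward the background's
    statistics in Lab space. A crude stand-in for the learned Harmonization filter."""
    fg = cv2.cvtColor(fg_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    bg = cv2.cvtColor(bg_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    visible = fg_alpha > 0                      # only consider visible foreground pixels
    fg_mean = fg[visible].mean(axis=0)          # mean L, a, b of the foreground element
    bg_mean = bg.reshape(-1, 3).mean(axis=0)    # mean L, a, b of the background plate

    shift = bg_mean - fg_mean
    shift[1:] *= 0.5                            # move color (a, b) only halfway, luminosity (L) fully
    out = fg.copy()
    out[visible] += shift
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
```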

Color Transfer (Beta)

Color Transfer takes the color palette of one image and applies it to another. This is a major timesaver for the very common workflow of color matching and provides a great starting point for subsequent grading.
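The beta filter’s internals are not public, but the classic statistical color transfer of Reinhard et al. illustrates the general idea: match each channel’s mean and spread in a perceptual color space. The function below is a minimal sketch under that assumption, not Adobe’s implementation:

```python
import numpy as np
import cv2  # OpenCV, assumed available


def transfer_palette(source_bgr, reference_bgr):
    """Match the color statistics of `source` to `reference` in Lab space
    (Reinhard-style transfer). Both inputs are 8-bit BGR images."""
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    # Per-channel mean and standard deviation of each image
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    ref_mean, ref_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))

    # Shift and scale each channel so its statistics match the reference
    out = (src - src_mean) / (src_std + 1e-6) * ref_std + ref_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
```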

Hover Auto-masking Object Selection Tool

The Object Selection Tool has been significantly improved. Now an artist can just hover over the object they want to select, and with ML segmentation Photoshop will select it. The tool can detect most objects within an image, but not yet all of them. Adobe is using its new, improved Sensei AI machine learning segmentation models to detect objects. Under the hood, Sensei AI is more than just one technology or one ML approach; it is a blanket term covering a range of ML approaches: segmentation, style transfer, object recognition and more. (A rough sketch of the hover-to-mask idea follows the list below.)

  • Selections made with the Object Selection Tool are more accurate and preserve more detail in the edges of the selection than before. The tool still works better on well-defined objects than on regions without contrast.
  • If any objects are not detected or are only partially detected, a user can simply click and drag a marquee over the additional areas to select them.
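Adobe’s segmentation models are proprietary, but the hover interaction itself can be approximated with any off-the-shelf instance segmentation network. The sketch below uses torchvision’s pretrained Mask R-CNN (assuming torchvision ≥ 0.13) purely to illustrate “return the mask of whatever object sits under the cursor”; it is not Sensei:

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf instance segmentation model, standing in for Adobe's Sensei models
model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()


def mask_under_cursor(image_path, x, y, score_thresh=0.5):
    """Return a binary mask for whichever detected object lies under pixel (x, y),
    or None if nothing confident is found there (fall back to a marquee drag)."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([img])[0]
    for soft_mask, score in zip(pred["masks"], pred["scores"]):
        if score < score_thresh:
            continue
        hard = soft_mask[0] > 0.5        # threshold the soft mask into a binary one
        if hard[int(y), int(x)]:         # the hovered pixel falls inside this instance
            return hard.numpy()
    return None
```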

One-Click Masks All Objects in a Layer

There is also a new menu item that will improve selection and masking speed. Adobe has added an option to mask all objects in a layer, which leverages the new smarts of the Object Selection Tool. Users choose Layer > Mask All Objects to generate masks for all the objects detected within a layer with a single click.

Content Authenticity Initiative (CAI)

Adobe is working to combat misinformation through digital content provenance. The CAI is a community of hundreds of media and tech companies developing the industry standard for the provenance of digital imagery and other file types. It was founded by Adobe, The New York Times Co., and Twitter in 2019. Faced with increasing challenges to media integrity, the CAI provides creators and consumers with a simple, reliable method to determine the authenticity of content and bolster trust. Adobe is creating Content Credentials, an open industry standard that uses tamper-evident metadata to verify content authenticity.

You can now inspect the history and metadata of any image made with Content Credentials. You can vary the amount of metadata stored and the level of editing information. This also works with images from the Adobe Stock library.
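The actual Content Credentials format uses signed manifests and certificate chains, but the tamper-evident principle can be shown with a toy hash-and-sign sketch. The manifest fields, the HMAC signing and the function names below are illustrative assumptions, not the real specification:

```python
import hashlib
import hmac
import json


def make_content_credential(image_bytes, edit_history, signing_key):
    """Record a hash of the asset plus its edit history, then sign the record.
    Any later change to the pixels or the history invalidates the signature."""
    manifest = {
        "asset_hash": hashlib.sha256(image_bytes).hexdigest(),
        "edits": edit_history,  # e.g. ["crop", "neural_filter:landscape_mixer"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_content_credential(image_bytes, manifest, signing_key):
    """Recompute the asset hash and the signature; both must match."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(image_bytes).hexdigest() != unsigned["asset_hash"]:
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])
```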

Illustrator Interop

It is now possible to copy vector shapes such as rectangles, polygons, circles, lines, and compound paths from Illustrator and paste them into Photoshop while maintaining editable attributes. Adobe supports fill, stroke, blend mode, and opacity.

There is also support for compound paths, shapes created using Pathfinder, and clipping masks. Additionally, groups and layers pasted into Photoshop are made to match as closely as possible what they looked like when authored in Illustrator. If Photoshop cannot maintain the editability of something from Illustrator, it at least tries to paste it to match the original visual fidelity seen in Illustrator.

Improved Color Management and HDR Capabilities

Photoshop now supports Apple’s XDR displays in full high dynamic range. The newly released MacBook Pro 14” and 16” M1 models feature XDR displays (up to 1600 nits peak), in addition to the large Apple Pro Display XDR. This new feature helps artists see the full richness of colors and works well with the new iPhone 13 Pro Max HDR features.

Premiere Pro

Remix

One of the new Sensei AI machine learning tools in Premiere Pro is Remix. This is ‘context-aware scale for audio’. If an audio track is too long for an edit, rather than having to edit it manually, a user can effectively scale the audio track, and Remix will adjust it while keeping the key audio features. While no one denies that major audio editing would still be needed if a hero audio track required shortening, this is an invaluable tool for fast temp tracks, demo reels, or guide audio.
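Adobe has not described Remix’s internals, which will certainly be beat- and section-aware. A much cruder way to convey the idea of “scaling” audio by content is to find two similar-sounding moments the right distance apart and splice them together, as in the hypothetical sketch below (librosa is assumed for feature extraction):

```python
import numpy as np
import librosa  # assumed available for audio loading and feature extraction


def shorten_by_splice(path, seconds_to_cut, hop=512):
    """Remove roughly `seconds_to_cut` of audio by splicing at two
    similar-sounding frames. A toy illustration, not Premiere's Remix."""
    y, sr = librosa.load(path, sr=None)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, hop_length=hop)

    frames_to_cut = int(seconds_to_cut * sr / hop)
    if frames_to_cut >= chroma.shape[1]:
        raise ValueError("cannot cut more audio than the track contains")

    # Pick the splice pair (i, i + frames_to_cut) whose frames sound most alike
    best_i, best_score = 0, -np.inf
    for i in range(chroma.shape[1] - frames_to_cut):
        score = float(np.dot(chroma[:, i], chroma[:, i + frames_to_cut]))
        if score > best_score:
            best_i, best_score = i, score

    spliced = np.concatenate([y[: best_i * hop], y[(best_i + frames_to_cut) * hop:]])
    return spliced, sr
```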

HDR

Adobe also recently introduced Auto Tone in Premiere Pro. This is a new technology for applying intelligent color corrections. Auto Tone adjustments are reflected in the Basic Correction sliders at the top of the Lumetri panel, and users can easily fine-tune the results. It acts as a guide to help new content creators become familiar with the adjustments available to improve color in their video, or a jumping-off point for experienced users to fast-track their color correction. Auto Tone will replace the current Auto adjustment button in the Lumetri panel, providing more sophisticated color correction, with better results, in a single click.
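Auto Tone itself is a learned, Sensei-driven correction surfaced through the Lumetri sliders. For readers curious what even the simplest non-ML “auto” pass looks like, the sketch below stretches each channel between its 1st and 99th percentiles; it is a baseline illustration only and bears no relation to Adobe’s model:

```python
import numpy as np


def naive_auto_tone(rgb, low_pct=1.0, high_pct=99.0):
    """Stretch each channel so its 1st/99th percentiles map to black/white.
    A classical baseline, not Premiere Pro's Auto Tone."""
    out = np.empty(rgb.shape, dtype=np.float32)
    for c in range(rgb.shape[-1]):
        lo, hi = np.percentile(rgb[..., c], [low_pct, high_pct])
        out[..., c] = (rgb[..., c].astype(np.float32) - lo) / max(hi - lo, 1e-6)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```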

MAX Tech Sneak Peeks

Here are some of the highlights from the Tech previews. These are not products but rather a series of ML tech demos showing the R&D efforts going on inside Adobe.

Project Morpheus

Morpheus is a video extension of the Photoshop Neural Filter face edits that previously only worked on a still image. It is powered by Adobe Sensei. Project Morpheus uses machine learning to automate frame-level changes into smooth, consistent results on faces.

Artful Frames

Artful Frames does a similar thing conceptually, allowing style transfer onto video clips, making it very easy to produce your own a-ha ‘Take On Me’ style video look, or that of an impressionist painting.

Strike a Pose

Strike a Pose allows anyone to adjust a pose. By providing a reference image of a person in the desired pose, you can leverage machine learning to reposition the person in an image into the same stance. Through a unique mix of data points and texture mapping, Project Strike a Pose is able to replicate elements like clothing, hair, and skin color to match the source image, while still accounting for factors like depth and lighting.

Project On Point

Unlike the Strike a Pose demo, the On Point demo allows users to craft a body skeleton and have a visual search engine find images that contain people in those poses. Rather than searching on metadata or tags, it is searching on body positions.

Project Make it Pop

Make It Pop identifies parts of an image (background, foreground, body parts, etc.) and converts them to vector shapes. From there a user can choose from a gallery of looks and animations to apply to the image, transforming a picture into a graphic version of the original. Given just how much work is required to accurately roto a person from a shot in Illustrator, this is extremely useful and may be a path to Adobe exploring actual vector roto generation from video in the future. The significance is that a current pixel-based object segmentation solution may have temporal artifacts and is also uneditable, but it would be remarkable in the future to have a spline or vector auto-roto solution in AE or Premiere Pro.
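Whatever Project Make It Pop does internally, the pixels-to-vectors step it implies can be pictured as tracing a segmentation mask’s outline and simplifying it into an editable polygon. The sketch below does exactly that with plain OpenCV contour tracing; it is an illustration of the concept, not Adobe’s technique:

```python
import cv2  # OpenCV, assumed available
import numpy as np


def mask_to_vector_paths(binary_mask, epsilon_px=2.0):
    """Trace the outlines of a binary segmentation mask and simplify each one
    into a polygon that could be stored as an editable vector path."""
    contours, _ = cv2.findContours(
        binary_mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    paths = []
    for contour in contours:
        approx = cv2.approxPolyDP(contour, epsilon_px, True)  # Douglas-Peucker simplification
        paths.append(approx.reshape(-1, 2))                   # (N, 2) array of vertex coordinates
    return paths
```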
