
The Pixel Farm PFTrack v2017

3D Tracking Reinvented

Tracking professionals throughout the VFX industry regard PFTrack as the most innovative product of its kind. With unparalleled flexibility, functionality and precision, it is the go-to matchmoving and layout software for a reason: it does what the competition can’t.


Highlights

Building on a rock-solid camera tracking and image analysis engine, PFTrack adds exclusive technologies that stretch far beyond the capabilities of conventional matchmoving software, empowering visual artists to recreate entire scenes within an accurate world space defined by camera metrics.

Every VFX pipeline will benefit from data that only PFTrack can deliver, meaning creative possibilities are limitless and accessible within a single working environment that inspires imagination.

2017 Features Overview

Integrated PFDepth

PFTrack just got a whole lot deeper. Generate depth maps automatically from real-world camera metrics, save huge amounts of time with some of the industry’s most advanced rotoscoping tools, convert any 2D footage into 3D and benefit from the finest stereo pipeline commercially available. PFDepth is now completely integrated into the PFTrack Tree. Enjoy.

 

Enhanced UI for enhanced productivity

Store effects trees as presets and use them in other projects or share them with your colleagues. Nodes are organised into logical groups, and the most commonly used nodes can be stored in custom folders for quick retrieval. PFTrack just keeps getting better and better.

 

Automated multi-layer texture extraction

How do you improve the finest embedded photogrammetry tools in the industry? Simple: add the ability to generate Normal, Displacement, Occlusion and Colour maps quickly and easily. Combined with our recently released Mesh Simplification Tools, PFTrack 2017 now packs the ultimate asset-building toolset for games, VR and VFX.

 

Expanded support for Digital Cinematography Cameras

Import and debayer ARRI RAW directly in the tree. Automatically extract embedded camera metadata from RED and ARRI footage, and lens metadata from Cooke Optics. All of it is available directly within the PFTrack 2017 tree and via Python scripting.

 

The magic RGBD sensor

It’s new, and it’s set to become an industry game changer. Depth-sensing cameras are in their infancy, but we wanted to give PFTrack users early access to some experimental R&D. Check out the movie for a glimpse into the future of VFX and VR asset capture, done the PFWay.


Key PFTrack Features Include

Photogrammetry Embedded

Class-leading photogrammetry toolset for building accurate geometry, but with that PFTrack Twist.

 

Performance Capture

Multi-camera motion capture, geometry tracking and object tracking. If it moves independently of the camera, we’ve got it covered.

 

Layout

Accurately recreate the world space, camera, and object movements of the original live action scene, and rebuild the natural environment where CG characters will exist.

 

Survey Solver

With direct support for raw LIDAR scans and the unique photosurvey node, PFTrack makes quick work of immense survey data packages. Absolute precision doesn’t bring your productivity to a halt.

 

3D Camera Tracking

Easy to use, accurate, fast and ahead of the curve. PFTrack’s matchmove core gets your job done with class-leading results.

 

Spherical Tracking Toolset

Your fast track to creating incredible 360° content for VFX, VR, games and more. This toolset comprises three brand-new nodes: Spherical Track, Spherical Orient and Spherical Render.

  • The Spherical Track Node - A fast, streamlined solution for tracking and solving the full equirectangular image from 360° cameras. The node tracks six cameras at once from the equirectangular panorama, all aligned and contained in the same scene, which are then passed downstream through the node tree, opening up the full potential of PFTrack’s renowned feature set (a minimal coordinate sketch follows this list).
  • The Spherical Orient Node - Stabilise the camera rotation of your spherical track, as well as re-orient your panorama and focus your front view on animatable look-at points.
  • The Spherical Render Node - Finally, render your new equirectangular panorama, which, used alongside tracking data from the six cameras, offers VFX artists a fully equipped spherical toolset for 360° content creation at their fingertips.
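
For readers curious about the geometry behind the Spherical Track node, the sketch below shows the standard mapping between an equirectangular pixel and a 3D viewing direction, which is the coordinate relationship any spherical tracker works with. It is a minimal, generic Python sketch using an assumed axis and wrap-around convention; the function names are illustrative and are not part of the PFTrack API.

    import math

    def equirect_to_direction(u, v, width, height):
        """Map an equirectangular pixel (u, v) to a unit viewing direction.

        Convention (an assumption, not PFTrack's internal one): u = 0 lies at
        longitude -180 degrees and v = 0 is the top of the frame (+90 degrees latitude).
        """
        lon = (u / width) * 2.0 * math.pi - math.pi      # -pi .. +pi
        lat = math.pi / 2.0 - (v / height) * math.pi     # +pi/2 .. -pi/2
        x = math.cos(lat) * math.sin(lon)
        y = math.sin(lat)
        z = math.cos(lat) * math.cos(lon)
        return (x, y, z)

    def direction_to_equirect(x, y, z, width, height):
        """Inverse mapping: unit direction back to equirectangular pixel coordinates."""
        lon = math.atan2(x, z)
        lat = math.asin(max(-1.0, min(1.0, y)))
        u = (lon + math.pi) / (2.0 * math.pi) * width
        v = (math.pi / 2.0 - lat) / math.pi * height
        return (u, v)

    # Example: the centre of a 4096x2048 panorama looks straight down +Z.
    print(equirect_to_direction(2048, 1024, 4096, 2048))   # -> (0.0, 0.0, 1.0)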

 

PFDepth

Double The Technology, Galactic Results

The fusion of PFTrack and PFDepth. Recreate accurate world space and take your VFX pipeline to new dimensions. Now integrated with PFTrack.

Dimensionalisation professionals turn to PFDepth for one reason: to work within a true 3D world space using real-world camera models. This seemingly simple concept is what allows PFDepth to create extremely accurate per-pixel 2D-to-3D conversions, generate depth maps for deep compositing and fix problems in a stereo image pair that originated when the scene was shot. The result is total creative control, without the guesswork.

And with a state-of-the-art review environment that checks for comfort violations prior to rendering, PFDepth ensures breathtaking 3D, without the headaches.
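
As a rough illustration of what real-world camera models buy you, the snippet below shows the standard relationship used when a per-pixel depth value is turned into a stereo screen disparity for a parallel rig. It is a minimal sketch, not PFDepth code, and the focal length, interaxial and convergence values are made-up example numbers.

    # Screen disparity from depth for a simple parallel stereo rig:
    # disparity_px = focal_px * interaxial * (1/depth - 1/convergence).
    # All values below are illustrative and not tied to any particular PFDepth node.

    def depth_to_disparity(depth_m, focal_px, interaxial_m, convergence_m):
        """Positive result = closer than the convergence plane (negative parallax)."""
        return focal_px * interaxial_m * (1.0 / depth_m - 1.0 / convergence_m)

    focal_px = 1800.0        # focal length expressed in pixels
    interaxial_m = 0.065     # roughly human eye separation
    convergence_m = 4.0      # objects at 4 m sit on the screen plane

    for depth in (2.0, 4.0, 10.0):
        print(depth, "m ->", round(depth_to_disparity(depth, focal_px, interaxial_m, convergence_m), 2), "px")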

Key PFDepth Features Include

2D to 3D Conversion

Automated Z-depth tools that make dimensionalisation as easy as x, y, z, plus a QA toolset that guarantees correct results.

 

Stereoscopic Correction

Our renowned camera tracking technologies guarantee absolute precision, even if your camera rig doesn’t.

 

Real Depth

Deep Compositing technologies ensure PFDepth fits your pipeline where you choose.

 

 

 

 

What's new in version 2017?

All functionality of PFDepth embedded in PFTrack

All PFDepth nodes are now fully integrated and available in PFTrack

  • Many more ways to create and manipulate depth maps:
    • Updated Z-Depth Solver node
    • Z-Depth Tracker, Merge, Edit, Filter, Composite and Cache nodes
    • Z-Depth Object node
    • Rotoscope-based depth editing
    • Ideal tool to prepare clips for z-based compositing
  • Extended stereo camera and image pipeline:
    • Build Stereo Camera node to automatically position the right-eye camera after tracking the left-eye
    • Stereo Disparity Solver, Disparity Adjust and Disparity-to-Depth conversion nodes
    • Fix common issues such as stereo keystone alignment and left/right-eye colour and focus mismatches
    • Render left and right-eye images from a single clip using Z-Depth data

User Interface Updates and Productivity Enhancements

  • Node creation panel has been updated with nodes organised into groups to make them easier to find
  • New Custom node group, where commonly used nodes can be placed for quick access
  • Tree layouts can be saved as XML preset files to help quickly construct common sets of nodes
  • Tree preset XML files can be copied onto other machines or given to users to share common layouts

Extended Digital Cinematography Camera Support

  • Added support for reading ARRI RAW media files
  • Camera and lens metadata is automatically read from RED and ARRI source files
  • ARRI metadata can also be read from DPX, OpenEXR or Quicktime ProRes files
  • Added support for importing custom XML metadata to the Clip Input node (see the example after this list)
  • All metadata is passed through the tree and can be accessed by Python scripts or export nodes
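
The exact schema accepted by the custom XML metadata import is defined by PFTrack and is not reproduced here. Purely as a hedged illustration, the sketch below uses Python’s standard library to read per-frame focal length values from a hypothetical XML layout, which is the kind of pre-processing step a pipeline script might perform before handing metadata to the Clip Input node.

    import xml.etree.ElementTree as ET

    # Hypothetical layout for illustration only -- PFTrack defines its own schema.
    sample = """<clip>
      <frame number="1" focal="32.0"/>
      <frame number="2" focal="32.5"/>
    </clip>"""

    root = ET.fromstring(sample)
    focal_by_frame = {
        int(f.get("number")): float(f.get("focal"))
        for f in root.findall("frame")
    }
    print(focal_by_frame)   # {1: 32.0, 2: 32.5}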

Advanced Photogrammetry Texture Extraction Tools

  • An optimized texture map can now be created automatically in the Photo Mesh node as part of the simplification process
  • Exposure and brightness differences in the source media can be automatically corrected to provide the best quality texture map
  • Exposure-balanced images are automatically passed downstream, and can be used in the Texture Extraction node for manual texture painting if required
  • Normal, displacement and occlusion maps can also be generated during simplification, to ensure the simplified mesh retains as much visual fidelity as possible
    • Normal maps support both world and Mikk tangent spaces (a minimal encoding sketch follows this list)
    • Occlusion maps can be generated for either the sky or local surface occlusion
    • Additional texture maps are exported automatically by the Export node
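
The normal maps mentioned above store unit vectors as colours; Mikk tangent space only changes the basis the vector is expressed in, not the encoding. As a small, generic sketch (plain NumPy, no PFTrack code involved), this is the conventional mapping between a normal vector and an 8-bit RGB texel.

    import numpy as np

    def encode_normals(normals):
        """Map unit normals with components in [-1, 1] to 8-bit RGB in [0, 255]."""
        return np.clip((normals * 0.5 + 0.5) * 255.0, 0, 255).astype(np.uint8)

    def decode_normals(rgb):
        """Inverse mapping, re-normalised to absorb quantisation error."""
        n = rgb.astype(np.float32) / 255.0 * 2.0 - 1.0
        return n / np.linalg.norm(n, axis=-1, keepdims=True)

    # A flat, straight-on normal map encodes to the familiar pale-blue texel.
    flat = np.tile(np.array([0.0, 0.0, 1.0], dtype=np.float32), (4, 4, 1))
    print(encode_normals(flat)[0, 0])   # -> [127 127 255]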

Experimental RGBD Pipeline for Depth Sensors

  • Z-Depth data captured by external sensors can be attached to an RGB clip and passed down the tracking tree
  • Auto Track and User Track nodes updated to read z-depth values for trackers at each frame
  • Camera Solver node will use tracker z-depth values to help solve for camera motion
    • Can reduce drift in long shots
    • Can improve accuracy when tracking complicated camera movements
    • Provides 3D data for nodal pans
    • Provides a real-world scale without any additional steps
  • Z-Depth Mesh node can be used to convert depth maps into a coloured triangular mesh
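
Converting a depth map into geometry, as the Z-Depth Mesh node does, starts from a back-projection of the kind sketched below. This is a generic pinhole-camera sketch in NumPy: the intrinsics are made-up example values, and the actual node goes further by building a coloured, triangulated mesh rather than a raw grid of points.

    import numpy as np

    def backproject_depth(depth, fx, fy, cx, cy):
        """Turn a depth map (metres per pixel) into an (H, W, 3) array of camera-space points."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.dstack([x, y, depth])

    # Illustrative intrinsics for a 640x480 sensor -- not taken from any real device.
    depth = np.full((480, 640), 2.0, dtype=np.float32)   # a flat wall two metres away
    points = backproject_depth(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0)
    print(points.shape, points[240, 320])                # (480, 640, 3) [0. 0. 2.]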

An iOS application will be released during 2017 allowing depth data to be recorded using an iPad and the Occipital Structure Sensor capture device.

Additional Improvements and Features

  • Documentation updates and improvements
  • Added camera presets for ARRI ALEXA and RED cameras
  • Improved initial keyframe selection algorithm in Camera Solver node
  • Improved Auto Track feature tracking when using undistorted image plates containing blank edge areas
  • Custom XML metadata import can be used to define per-frame lens distortion, focal length and camera pose values
  • Cooke /i Data file import has been moved from the Edit Camera node into the Clip Input node
  • Added support for importing .PTS and FARO .XYB LIDAR files in the Survey Solver node (a minimal reader sketch follows this list)
  • Improved pivot-point handling in the Survey Solver node when LIDAR datasets contain stray points far from the scene
  • Added an XML export python script to store camera data using the same custom XML schema as supported by the Clip Input node
  • Added depth-test and back-face culling options to the Geometry Track node to help painting vertex weights and deformable groups on complex geometric models
  • Added a focal length reset button to Camera Solver and Survey Solver nodes that can be used to reset the solved focal length to the incoming value if it has been set by another node upstream
  • Added exposure and image processing controls to the Clip Input node for control of ARRI RAW and OpenEXR decoding
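
The Survey Solver handles .PTS import natively; for pipeline scripts that need to inspect or pre-filter a scan before it reaches PFTrack, the reader below is a minimal sketch. It assumes the common ASCII .PTS layout in which the first line holds the declared point count and each subsequent line begins with the X, Y and Z coordinates; any intensity or colour columns that follow are ignored here, and column conventions do vary between exporters.

    def read_pts_positions(path):
        """Read x, y, z positions from an ASCII .PTS scan, skipping the leading count line."""
        points = []
        with open(path) as f:
            f.readline()                     # first line: declared point count (ignored here)
            for line in f:
                parts = line.split()
                if len(parts) < 3:
                    continue                 # skip blank or malformed lines
                points.append(tuple(float(p) for p in parts[:3]))
        return points

    # Usage (hypothetical file path):
    # points = read_pts_positions("scan_site_01.pts")
    # print(len(points), "points; first:", points[0] if points else None)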

Updates

What's new in version 2016?

Spherical Tracking Toolset

This toolset comprises three brand-new nodes: Spherical Track, Spherical Orient and Spherical Render.

New Photo Mesh Simplification Tools

Adds the option to simplify a mesh model, allowing you to pass the mesh downstream for further processing in PFTrack and to export the simplified model as low-resolution meshes for game assets and VR environments.

Texturing Pipeline

After applying the Photo Mesh simplification tool, a mesh can be passed directly to the Texture Extraction node to generate texture maps. With this added functionality, all the essential steps of your modelling pipeline are now available directly inside PFTrack.

 

Supported Operating Systems

  • Windows: Microsoft Windows 7 and 8 (only 64-bit supported)
  • Mac: OS X® 10.10 or later (only 64-bit supported)
  • Linux: CentOS 6.0 or later (only 64-bit supported)

Minimum Supported Hardware

  • Intel Core™2 Duo processor or higher
  • 4GB RAM
  • 2GB available hard disk space for full content installation
  • Mouse or pointing device
  • 1920×1080 resolution monitor
  • 1GB dedicated GPU with accelerated graphics (2GB recommended)
  • OpenCL 1.2 or later / CUDA Compute Capability 3.0 or later †

† OpenCL is used exclusively on OS X. On Windows and Linux, CUDA is used for NVIDIA GPUs, and OpenCL for all other cards.

Optimal Processing

The following are recommended for optimal processing.

  • A fast (300 MB/s or more) storage device, required for optimal handling of footage at 2K resolution and higher.
  • A CPU with at least 4 cores is recommended for optimal UI performance.
  • Increased RAM for data caching: clips stored in RAM are processed far more quickly.
  • A 2GB GPU is required for medium- and high-resolution depth map creation for photo meshing.

Licensing

Please note: a direct internet connection is required to install and administer licenses using the license manager. Otherwise, licenses must be managed manually.

Licenses are node-locked unless a customer purchases the Floating License Server/Manager.

