I am a technical director, artist, and 3D generalist. I thrive on problem solving and finding solutions to weird problems. Over the years, difficult creative problems have required me to simultaneously act as artist, researcher, and developer, applying technical proficiency guided by an artistic eye. I seldom work alone, am comfortable working within a team, and have recently taken on leadership roles. My history as an educator and technical director has helped foster an ability to communicate well with a team, teach new processes and tools, build documentation, and listen to and support the needs of producers, project managers, and artists. I take pride in my flexibility and ability to assume a wide range of technical and creative roles.
Houdini Pipeline Development
While freelancing for Scholar I had the pleasure of building a collection of HDAs and tools to integrate Houdini into their pipeline.
Scholar Publish ROP HDA
Design: The idea behind this HDA was to allow artists to publish any number of assets, in a range of formats, from within any SOP network, and enforce proper naming, path, and format conventions appropriate for their pipeline.
Simple creation: Artists are able to drop a Scholar Publish ROP, pick a name for the asset, and pick an output format.
Dynamic parameter interface allows publishing simulations, sequenced frames, single time-dependent frames, single files, and binary/ascii output.
Individual Scholar Publish ROPs can be temporarily excluded from bulk publishing.
All notes entered for each publish ROP are saved alongside publish files.
Batch publishing is supported in both Houdini and command line.
Publishes support shot and asset conventions.
Scholar Publish ROP supported publish formats: Alembic, FBX, FBX rigged and animated sequences (KineFX input), bgeo, VDB, and USD (crate and ASCII).
Each Scholar Publish ROP stores custom asset information and metadata as JSON within a custom multistring parameter.
The ROP handles automatic versioning upon publish.
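The automatic versioning can be sketched roughly as follows. This is a minimal illustration, not Scholar's actual code: the v001-style folder convention and function name are assumptions.

```python
import re
from pathlib import Path

# Assumed convention: version folders named v001, v002, ... live
# directly under an asset's publish root.
_VERSION_RE = re.compile(r"^v(\d{3})$")

def next_version_dir(publish_root: str) -> Path:
    """Scan the publish root for existing versions and return the
    directory the next publish should write into."""
    root = Path(publish_root)
    versions = [
        int(m.group(1))
        for p in root.glob("v*")
        if (m := _VERSION_RE.match(p.name))
    ]
    next_version = max(versions, default=0) + 1
    return root / f"v{next_version:03d}"
```

Because the scan tolerates gaps (a deleted v002, say), the next publish always lands one past the highest existing version rather than colliding with an old one.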
Scholar Asset Import
Design: Houdini allows importing a wide range of formats from any point within networks and subnetworks in SOPs. This can prove to be an organizational nightmare if artists are allowed to organize their hip files as they see fit. I developed an asset import workflow that collects all pipeline imports at the root of the object (SOPs) context. Each imported asset gets its own geometry node with a custom “Scholar” properties tab prepended to the geometry node. The artist then uses the output nulls within these geometry nodes to merge the imported assets into whatever network they need. This design enforces a consistent location to view and track all imported pipeline assets.
Automatic network creation to properly load, process, separate, and create output nodes for artists to use.
Each asset node will automatically search the project to identify all usable published versions.
Choosing an asset version is as easy as selecting the desired version from the auto-populated dropdown.
QT/PySide Publish & Asset Managers
Design: It is necessary to allow an artist to create, configure, and manage publishing and asset import on an individual asset-by-asset basis within the network view. However, artists also need to be able to manage all publishing, asset importing, and asset versioning from within simple manager palettes. To accompany the individual publish and asset nodes I created publish and asset managers.
Publish Manager Features:
List all publish ROPs within a single table.
Enable/Disable publishing of any ROPs from the manager.
Selecting a Publish ROP from within the manager will navigate to, zoom, and center that node in the node graph.
An artist is able to trigger a batch publishing of all enabled publish ROPs from within the manager.
Asset Manager Features:
List all imported assets within a single table.
Select individual asset versions or batch update all assets to their newest available versions.
Identify out-of-date assets.
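The version tracking behind the manager can be sketched as below. The entry structure and field names are hypothetical, purely to illustrate how out-of-date detection and batch updating fit together:

```python
from dataclasses import dataclass, field

@dataclass
class AssetEntry:
    """One row in the asset manager table (illustrative structure)."""
    name: str
    current_version: int
    available_versions: list = field(default_factory=list)

    @property
    def latest_version(self) -> int:
        return max(self.available_versions)

    @property
    def out_of_date(self) -> bool:
        # An asset is out of date when a newer publish exists.
        return self.current_version < self.latest_version

def batch_update(entries):
    """Set every asset to its newest available version, as the
    manager's batch-update action would."""
    for entry in entries:
        entry.current_version = entry.latest_version
```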
User-Friendly Software Tools
While at Clutch Studios I developed and maintained a Flask server within the studio to address a number of important issues:
Python is notoriously difficult to package as a standalone application that can easily be shared. Flask has allowed Python developers in the studio to continue to develop tools as they had before and provides a relatively easy method for creating GUIs with an extremely fast turnaround.
Distribution & access
Python tools built with Qt or Electron proved difficult or inefficient to push to coworkers due to the need to build for and target multiple platforms. After moving to a web application approach, distribution was no longer an issue: as long as you were on the local network or connected via VPN, you had access to the latest tools with zero installation. Additionally, a web interface provides a method of access that both technical and non-technical individuals are familiar with. This allowed the entire tool stack to be utilized by the whole studio, not just those familiar with complicated software packages.
Software access via RESTful APIs
Other tools commonly used throughout the studio (Maya, Houdini, etc.) could easily make use of the functionality of the Flask applications via a REST API.
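A minimal sketch of what such an endpoint might look like. The route, asset names, and in-memory store are illustrative assumptions, not Clutch's actual API:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory store standing in for the studio database.
ASSETS = {"mv_series_cab": {"latest_version": 4}}

@app.route("/api/assets/<name>", methods=["GET"])
def get_asset(name):
    """DCC apps (Maya, Houdini, etc.) can query asset data with a
    plain HTTP GET -- no studio-specific client library required."""
    asset = ASSETS.get(name)
    if asset is None:
        return jsonify({"error": "unknown asset"}), 404
    return jsonify(asset)
```

From inside a DCC's Python interpreter, a call is then just `urllib.request.urlopen(...)` against the server, which is what made the same functionality reachable from every package in the studio.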
Build off of existing Python knowledge
Within Clutch, Python was the language everyone knew. I dislike silos of knowledge and wanted to allow the handful of Python developers within the studio to easily develop tools and make them accessible. I was able to mentor our Python developers on Flask development and building web front ends, allowing new tools to be built by more than just one individual.
Flexible & accessible data storage
I chose MongoDB as the database behind the Flask server. Changing client requests, long-term projects (years long), and a constant need to start projects before complete information had been handed over to us meant a database with a rigid schema would have been far too cumbersome. MongoDB afforded us the flexibility required by Clutch's projects.
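The flexibility that mattered can be shown with two plain documents; the asset names, fields, and filter helper here are invented for illustration, not taken from Clutch's schema:

```python
# Two asset documents from the same (hypothetical) collection.
# With a schemaless store, a years-long project can grow new fields
# without migrating every existing document.
tank_doc = {
    "name": "sportster_tank",
    "client": "Harley-Davidson",
    "tags": ["fuel", "chrome"],
}
cab_doc = {
    "name": "mv_series_cab",
    "client": "International Trucks",
    "tags": ["cab"],
    "finishes": ["gloss", "matte"],  # field added later in the project
    "print_dpi": 300,                # only some assets ever need this
}

def find(collection, **criteria):
    """Minimal stand-in for a Mongo-style equality filter: documents
    missing a queried field simply don't match."""
    return [
        doc for doc in collection
        if all(doc.get(k) == v for k, v in criteria.items())
    ]
```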
Clutch's clients had demanding rendering requirements. Often we'd need to deliver a large number of photo-real images at resolutions ready for print, animations, or a high volume of images demonstrating various combinations of products. Nearly everything we rendered was built from CAD processed at an extremely high level of detail, so poly counts were always high. These demands long ago exceeded our ability to render on artists' local machines. So, over the years, in order to quickly meet the needs of Clutch's clients, I improved upon and expanded the rendering capacity of the studio, both locally and into the cloud.
I developed for and maintained a local (Mac & Windows) render farm of 41 computers.
Using AWS, I extended our rendering capabilities into the cloud, allowing us to render with up to 200 instances. We made use of both CPU and GPU instance types so we could take advantage of a number of rendering packages.
Once we had moved a significant portion of our rendering to the cloud, we found bandwidth was our largest render-speed bottleneck. I participated in setting the studio up with AWS Direct Connect, giving us a significant boost in render dependability and speed.
AWS Thinkbox Deadline
As our demand to process more CAD grew, it was clear we needed to distribute CAD processing across the farm. I developed a custom Deadline plugin allowing Clutch artists to distribute the conversion of CAD to polygon meshes at multiple levels of quality.
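The fan-out step of that distribution can be sketched as follows. The plugin name and job fields are hypothetical, not the actual Deadline job schema:

```python
def cad_conversion_jobs(cad_files, quality_levels=("low", "medium", "high")):
    """Expand a list of CAD files into one farm task per file per
    quality level -- roughly how a submission tool fans conversion
    work out across the farm before handing it to Deadline."""
    jobs = []
    for path in cad_files:
        for quality in quality_levels:
            jobs.append({
                "plugin": "ClutchCADConvert",  # hypothetical plugin name
                "input": path,
                "quality": quality,
            })
    return jobs
```

Each task is independent, so the farm can chew through every file/quality pair in parallel instead of one artist's machine converting serially.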
I developed tools that automated the submission of hundreds of Maya jobs at a time with properly set render layers and outputs.
ACES / OCIO
Both local and cloud render environments were capable of rendering in ACEScg with all of our rendering software.
International Truck’s new MV-Series of medium-duty trucks needed a debut that showcased their versatility and excited NTEA attendees. But in the limited space of a crowded convention hall, bringing in several trucks outfitted for different applications wasn’t an option. I worked closely with International to develop an interactive experience for the launch of their new vehicle in an extremely short amount of time. My responsibilities included:
C# - I was responsible for the development of all interactive coding in Unity, real-time shader/material creation, lighting, and real-time render optimization.
Creative collaboration - I collaborated closely with creative in the design of the interactive experience.
Team leadership - I led a small team of artists responsible for preparing CAD assets for real-time use and constructing the virtual environment.
Studio Migration to ACES/OCIO Workflow
I was responsible for moving the entire 3D department over to an ACES color workflow. Clutch's work over the years has almost entirely consisted of highly reflective surfaces (chrome, paint, glass, etc.) rendered with an eye towards photo-realism. The high dynamic range of ACEScg lent a level of realism that was well worth the trouble of transitioning the CG and compositing pipelines over to ACES/OCIO.
ACES and OCIO are fairly new additions to small CG/VFX studios, and no one on the 3D team was familiar with the technology or its workflows. It was my responsibility to educate the team on the conceptual and technical aspects of using ACES, which included creating and providing learning material for the team.
A suite of tools needed to be developed in Python to automate colorspace conversion, simplify workflows for CG artists, and intelligently migrate old assets forward to work in the new color management process. The majority of these tools were developed as part of the Flask server and had web front ends, allowing artists and employees lacking in-depth knowledge to participate in processing assets.
3D & Compositing Development
While at Clutch, a tremendous amount of time was put into the development of the CG pipeline and supporting the CG artists with various software tools. I'm very familiar with the APIs specific to Maya, Houdini, Modo, and Nuke. Some tools were simple enough to run from a script window; others necessitated creating an interface in Qt using PySide2.
Houdini App Engine
Solaris & USD (early research stages)
Kit & Plugin Development
The 3D asset database at Clutch has gone through a number of iterations over the years. The final iteration was stored in MongoDB and had a suite of tools that cataloged every asset as either a Maya scene file (.ma) or a Houdini .bgeo file. Each asset was associated with a client or a generic Clutch asset and had categories and tags to help with search and listing within CCDs. In the case of Harley-Davidson or International Trucks, a system of hooks and anchors was used to connect components and assemble them into a complete vehicle. This hooks-and-anchors system was described and stored with each asset, as was other client- and project-specific metadata. The Flask server provided a number of interfaces allowing a user to manually add or change database entries, as well as APIs for other CCDs to add, remove, modify, and delete data as needed and to display assets within other applications.
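The hooks-and-anchors idea can be illustrated with a small matcher; the component dictionaries and naming scheme below are invented for the example, not Clutch's actual convention:

```python
def assemble(components):
    """Pair each component's hooks with matching anchors on other
    components to build a vehicle assembly. Returns (part, parent)
    connection pairs."""
    # Index every anchor by name so hooks can look up their target.
    anchors = {}
    for comp in components:
        for anchor in comp.get("anchors", []):
            anchors[anchor] = comp["name"]
    connections = []
    for comp in components:
        for hook in comp.get("hooks", []):
            if hook in anchors:
                connections.append((comp["name"], anchors[hook]))
    return connections
```

Storing the hook and anchor names alongside each asset is what let tools assemble a complete vehicle from independently cataloged components.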
Image Processing & Delivery Tools
At Clutch there were a number of circumstances that necessitated the creation of a set of image processing tools:
Clients would constantly hand over images that were inconsistent in format, colorspace, and resolution.
For more than one client we would need to have very specific metadata added to the files before delivery.
We constantly had to deliver images in a number of colorspaces (CMYK, ACEScg, sRGB, Adobe1998, ...).
Images needed to be tagged to prevent color shifting due to improper colorspace assignment in whatever weird viewing app a client chose to view images in.
Because of circumstances like these, I wrote a handful of image processing and reporting apps on the Flask server that everyone used when receiving images from clients, handing images off between departments, and delivering images to clients.
colorspace conversion and image format changes (can recursively search folders for images)
metadata embedding app - employees (usually project managers) could provide an Excel document or a JSON file to be embedded into specific IPTC and EXIF metadata fields.
embedding color profiles (tagging) into delivery images
image reporting - often I would be given a folder of thousands of images of various resolutions and colorspaces and have to find which ones were incorrect. So, I developed a tool that could recursively search through folders, find every image within, pull out its size (ppi), resolution, colorspace, metadata, format, and channels, and save that data to a CSV so project managers could easily find problematic images without needing to code or endlessly sift through them by hand.
ACES batch conversion - we constantly had to convert images from Rec. 709 and sRGB to ACEScg, but, depending on use case, they would need a different color transform for proper conversion. We made heavy use of an app I developed which would find images and search for key terms in their names (bmp, disp, normal, etc.) to perform the correct conversion.
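The reporting tool's core loop can be sketched with a stdlib-only probe. Reading width, height, and bit depth from a PNG's IHDR chunk is real PNG structure; everything else (the field set, handling of other formats, colorspace, and metadata in the actual tool) is simplified away here:

```python
import csv
import struct
from pathlib import Path

def png_info(path):
    """Read width, height, and bit depth straight from a PNG header.
    The IHDR chunk always follows the 8-byte PNG signature, so the
    dimensions sit at fixed byte offsets."""
    with open(path, "rb") as f:
        header = f.read(25)
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        return None  # not a PNG
    width, height = struct.unpack(">II", header[16:24])
    return {"path": str(path), "width": width,
            "height": height, "bit_depth": header[24]}

def report(folder, csv_path):
    """Recursively scan a folder and write one CSV row per PNG, so a
    project manager can spot problem images in a spreadsheet."""
    rows = [info for p in sorted(Path(folder).rglob("*.png"))
            if (info := png_info(p))]
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["path", "width", "height", "bit_depth"])
        writer.writeheader()
        writer.writerows(rows)
    return rows
```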
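The key-term matching can be sketched like this. The mapping and the transform name are assumptions for illustration: data maps (displacement, normals, bump) must pass through unconverted, while color textures are assumed to need an sRGB-to-ACEScg transform.

```python
# Hypothetical mapping from filename key terms to the treatment used
# when converting to ACEScg. "raw" means no color transform.
KEYWORD_TRANSFORMS = {
    "disp": "raw",
    "normal": "raw",
    "bmp": "raw",
}
DEFAULT_TRANSFORM = "Utility - sRGB - Texture"  # illustrative OCIO-style name

def pick_transform(filename: str) -> str:
    """Choose a conversion transform by searching the filename for
    known key terms, as the batch-conversion app did."""
    name = filename.lower()
    for term, transform in KEYWORD_TRANSFORMS.items():
        if term in name:
            return transform
    return DEFAULT_TRANSFORM
```

Keying off filenames is imperfect but cheap: texture libraries almost always encode a map's role in its name, so the batch tool could run over thousands of files without anyone hand-classifying them.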