
core's Introduction

The safe post-production pipeline



Testing

$ cd avalon-core

# One-time build and database
$ docker run --name avalon-mongo -d mongo
$ . build_docker.sh

# Run
$ . test_docker.sh

Contributing

See CONTRIBUTING.md for a summary of guidelines for how you can contribute to Avalon.

Also visit the chat for a direct connection with current developers.


Code convention

Below are some of the standard practices applied to this repository.

  • Etiquette: PEP8
    • All code follows PEP8. It is recommended you use a linter as you work; flake8 and pylint are both good options.
  • Etiquette: Napoleon docstrings
    • Docstrings are written in Google (Napoleon) format. See Napoleon for details.
  • Etiquette: Semantic Versioning
  • Etiquette: Underscore means private
    • Anything prefixed with an underscore is internal to wherever it is used. For example, an underscored variable is only ever used in its parent function or class, and an underscored module is not for use by the end-user. In contrast, anything without an underscore is public, but not necessarily part of the API. Members of the API reside in api.py.
  • API: Idempotence
    • A public function must be able to be called twice and produce the exact same result. This means no changing of state without restoring the previous state when finishing. For example, if a function requires changing the current selection in Autodesk Maya, it must restore the previous selection prior to completing; see the sketch below.
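
For illustration, here is a minimal sketch of an idempotent public function, assuming Maya's cmds module; the helper names preserved_selection and export_selected_nodes are hypothetical, not part of the API.

from contextlib import contextmanager
from maya import cmds

@contextmanager
def preserved_selection():
    """Restore whatever was selected once the block exits."""
    previous = cmds.ls(selection=True)
    try:
        yield
    finally:
        if previous:
            cmds.select(previous, replace=True)
        else:
            cmds.select(clear=True)

def export_selected_nodes(nodes, path):
    """Hypothetical public function; changing the selection does not leak state."""
    with preserved_selection():
        cmds.select(nodes, replace=True)
        cmds.file(path, exportSelected=True, type="mayaAscii", force=True)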

core's People

Contributors

aardschok, antirotor, bigroy, davidlatwe, icyvapor, illicitit, jakubjezek001, jasperges, legacyid1991, mkolar, mottosso, tokejepsen


core's Issues

Have clone special tool connect visibility

On the current rig we have different hair styles, noses, etc. whose visibility is driven by an attribute on the rig;
e.g. the attribute Woman01 has a set-driven connection to the visibility of a group containing the meshes for _Woman01, while groups for other variations like Man01 are turned off.

The issue we noticed is that the out mesh created with the "clone special" tool does not get these visibility connections from the rigged mesh, so Alembics will not record these changes when exporting.

Update: We need a new Clone (special) tool that connects the visibility, but we should also keep the old one, since we only want visibility connected from the rigged mesh to the out mesh, not from the model reference to the rigged mesh.

Level Editor

Goal

Quickly dress and visualise dressing in shots.

Implementation

For each published model, also publish a top-down, orthographic snapshot along with physical measurements for later use in a level editor similar to those of 2D games.

Models are positioned on a 2D grid and exported as name: transformation, where name references the laid out asset and transformation contains the position, rotation and scale of the model.
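
As a rough illustration, the exported layout could be a mapping like the one below; the asset names and the position/rotation/scale layout are assumptions, not a defined schema.

layout = {
    "oakTree01": {
        "position": [12.0, 0.0, -4.5],
        "rotation": [0.0, 45.0, 0.0],
        "scale": [1.0, 1.0, 1.0],
    },
    "rock03": {
        "position": [-3.2, 0.0, 7.1],
        "rotation": [0.0, 0.0, 0.0],
        "scale": [2.0, 2.0, 2.0],
    },
}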

The resulting editor would allow for quickly flipping through shots like flicking through pictures on your computer, and making edits on the fly that in one way or another integrate back into the final shot.

Normalise references to external paths

Goal

Enable a flexible project root.

Rather than references, textures and caches being stored as absolute paths, utilise Maya's path resolver to remove the need for absolute paths.

Before

m:\projects\projectA\assets\Bruce\rig.ma

After

$PROJECTDIR\assets\Bruce\rig.ma

This way, a project can reside anywhere on the artist's machine, on any drive, in any subfolder.

Implementation

On scene save, nodes of a specific set of types are scanned for paths, and any path under the project directory is automatically replaced with the PROJECTDIR variable. The types are references, file nodes and Alembic caches - anything with support for automatic resolving.
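
A minimal sketch of such a pre-save pass, assuming the project root is exposed via a PROJECTDIR environment variable and covering only file texture nodes and Alembic caches; references would need their own remapping via the file command.

import os
from maya import cmds
from maya.api import OpenMaya as om

def normalise_paths(*_):
    """Replace absolute project paths with $PROJECTDIR before the scene is saved."""
    root = os.environ["PROJECTDIR"].replace("\\", "/")

    nodes = ([(n, "fileTextureName") for n in cmds.ls(type="file")] +
             [(n, "abc_File") for n in cmds.ls(type="AlembicNode")])

    for node, attr in nodes:
        path = (cmds.getAttr(node + "." + attr) or "").replace("\\", "/")
        if path.startswith(root):
            cmds.setAttr(node + "." + attr,
                         path.replace(root, "$PROJECTDIR", 1),
                         type="string")

# Run automatically just before every save.
om.MSceneMessage.addCallback(om.MSceneMessage.kBeforeSave, normalise_paths)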

Light Rig

For the lighting department:

Aim:
New Family to add, not yet decided but for now called light rig or light-rig
New Family for the asset creator
New Family for the asset loader

The idea is to implement a workflow for lighters and render artists so that they can export lighting setups, called light-rigs. The same setup can then be imported and used across multiple shots of a project, and also saved for later use such as turntables or demos.
(Screenshots attached: capture150, capture151_edit)

What is needed:

  • add family
  • add new maya "set"
  • add scheme update
  • research if lights for different render engines will become an issue for the instance collector
  • add new collector
  • add new validators
  • add new extractors
  • add functions if needed

Digression:
This is for now meant to be only for the Maya host and not all apps (the reason being that Maya is the only app we use for lighting today). If this can be made future-proof and work across different hosts, then that would be great.
Priority for now is to get it to work for Octane first, followed by Redshift, then Mental Ray.

Can mindbender ID validator be optional?

The animatic artists want to know if we can make some of the validators that check less important things optional. When they work, they create a lot of "dummy geo" that they want to be exported so that animators have something to start with.

Creator Templates

Goal

Reduce set-up time of new assets.

Motivation

All assets have some level of boilerplate associated with them, such as residing within an assembly called ROOT. Some assets come with more boilerplate, such as mindbender.rig.

We can reduce the time taken to set up each asset, minimise potential validation errors and further guarantee the uniformity of all assets by automatically providing the user with the necessary hierarchy and content per family of asset.
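
A minimal sketch of per-family boilerplate, assuming Maya; the TEMPLATES mapping is illustrative and the group names are borrowed from the rig outliner shown further down in this document.

from maya import cmds

TEMPLATES = {
    "mindbender.model": ["ROOT"],
    "mindbender.rig": ["ROOT", "ROOT|MESH", "ROOT|DATA", "ROOT|Controls"],
}

def create_template(family):
    """Create the empty group hierarchy expected by the given family."""
    for path in TEMPLATES.get(family, []):
        if cmds.objExists(path):
            continue
        parent, _, name = path.rpartition("|")
        if parent:
            cmds.group(empty=True, name=name, parent=parent)
        else:
            cmds.group(empty=True, name=name)

create_template("mindbender.rig")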

Selective Sync

Goal

Browse and download assets from any location.

Implementation

The loader displays local and remote assets, offering to "Make local". This would involve coupling SyncThing with the loader, along with somehow enabling one user to fetch available assets from another location.

The connection could be bi-directional, in that one freelancer may want to browse assets at Mindbender in Gothenburg, whereas another studio may be producing assets that Mindbender may be interested in. In this way, assets can be stored and shared from and to any location.

Records of available assets will initially be stored at Mindbender Gothenburg, later to potentially be delegated to a remote man-in-the-middle, such as Firebase.

mb.bat Presets

Goal

Reduce the number of steps required for an artist to start working on an asset.

Implementation

Pre-define project and shot at startup.

$ mb-projectA-shot020-animation.bat
Launching Maya..

mbID on shapes

Goal

Improve logic of .mbID.

Implementation

Transition the mbID attribute from residing on nodes of type transform to nodes of type mesh.

Motivation

IDs are currently only used for mesh-related operations, such as assigning shaders and connecting attributes, yet the attribute is stored on the transform. This has several drawbacks.

  1. When connecting mesh A to mesh B, if the transform of an ID has two shapes, which do we use?
  2. When shading one shape differently from another, which do we apply a published shader to?

Additionally, on a technical note, Houdini seems to only accept custom attributes via Alembic on shapes.

At the moment we work around it by validating that each transform only has a single mesh, but it's flawed and hacks are starting to manifest themselves throughout the pipeline.
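
A minimal sketch of the migration, assuming mbID is a plain string attribute; the function name is hypothetical.

from maya import cmds

def move_mbid_to_shapes():
    """Copy mbID from each transform to its shapes, then remove it from the transform."""
    for transform in cmds.ls(type="transform"):
        if not cmds.attributeQuery("mbID", node=transform, exists=True):
            continue
        value = cmds.getAttr(transform + ".mbID")
        for shape in cmds.listRelatives(transform, shapes=True, fullPath=True) or []:
            if not cmds.attributeQuery("mbID", node=shape, exists=True):
                cmds.addAttr(shape, longName="mbID", dataType="string")
            cmds.setAttr(shape + ".mbID", value, type="string")
        cmds.deleteAttr(transform, attribute="mbID")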

Alembic Exporter is not exporting visibility

The exporter does not export visibility animation on the meshes.
This is pretty urgent on the current project since we are using one rig and are just animating visibility on Meshes to turn them on and off to create 8 different characters.

Fixed Task Names

Goal

Reduce errors in the spelling of tasks - e.g. modeling versus modling - and enable custom functionality relative to a given task - e.g. a special "modeling" menu when modeling and a "lookdev" button for those performing lookdev.

Implementation

One option might be to produce a .bat executable per task, and call upon these when starting an application.

$ modeling maya
Launching Maya...
$ lookdev katana
Launching Katana..
$ modling maya
ERROR: Unrecognised command "modling"

Analytics

Goal

Gain insight into how assets are created and used, such that one can make educated guesses about how long future assets and, by extension, projects will take.

Motivation

Project management suites like Ftrack and Shotgun track human resources so as to enable extrapolation of said resources into realistic estimates on monetary and time cost for upcoming projects.

Data analytics does the same for hardware and software cost.

Spot bottlenecks ahead of time and gain an understanding of what causes or might cause file server slowdown. Understand how, when and where to split a directory tree into more and less performant counterparts.

Inspiration: Firebase Analytics, Kibana, Sentry.

Implementation

With #25 and #31, we can start tracking each aspect of the pipeline's use during a production. Creation or use of any asset can transparently be sent to tracking software to graph and otherwise visualise use of said data over time.
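
As a rough sketch of what "transparently sent" could mean, assuming an HTTP collector and the requests library; the URL and event fields are hypothetical.

import getpass
import time
import requests

ANALYTICS_URL = "http://analytics.local/events"  # hypothetical collector endpoint

def track(action, asset, **extra):
    """Send one usage event; tracking is best-effort and must never break the pipeline."""
    event = {
        "action": action,  # e.g. "publish", "load"
        "asset": asset,
        "user": getpass.getuser(),
        "time": time.time(),
    }
    event.update(extra)
    try:
        requests.post(ANALYTICS_URL, json=event, timeout=1)
    except requests.RequestException:
        pass

track("load", "Bruce", subset="modelDefault", version=3)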

Container Interactions

Goal

Enable custom interactions between loaded assets.

Motivation

Some assets naturally interact - such as shaders and geometry. Other assets interact more subtly, such as a backpack rig and character rig. Currently shaders are automatically assigned to all loaded assets with matching IDs.

Implementation

Expose user-defined actions on selecting two or more containers in the Manager.

Examples

  • lookdev attaches to model
  • clothRig attaches to animRig

MINDBENDER_RESOLUTION_WIDTH not working

The environment variables set in the project and asset .bat files have no effect in Maya due to a bug in the userSetup.py.

The bug is that values are set as strings rather than integers, causing Maya to throw an error at the first entry and go no further.

All values read from the environment are strings, so they must first be cast to integers in order to apply; see the sketch after the error output below.

The script editor should have been showing errors like these.

# Error: setAttr: Error reading data element number 1: 1920
# Traceback (most recent call last):
#   File "<maya console>", line 1, in <module>
# RuntimeError: setAttr: Error reading data element number 1: 1920 # 
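
A minimal sketch of the fix in userSetup.py; the height variable name and the use of defaultResolution are assumptions based on the issue title and error output.

import os
from maya import cmds

width = os.environ.get("MINDBENDER_RESOLUTION_WIDTH")
height = os.environ.get("MINDBENDER_RESOLUTION_HEIGHT")  # assumed counterpart variable

if width and height:
    # Environment values are always strings; cast before calling setAttr.
    cmds.setAttr("defaultResolution.width", int(width))
    cmds.setAttr("defaultResolution.height", int(height))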

Include history and connections with lookdev assets

At the moment lookdev only publishes shadingGroups.

Octane has a problem with SSS and displacement; they break each other. The solution right now is to apply an iDisplace, which is a mesh texture displacement deformer (iDeform plugin).

This is the workflow at the moment:
  • Import the desired defaultMesh.
  • CloneSpecial.
  • Polysmooth all cloned meshes (deformer nr 1, needed for iDisplace to have enough polys to displace as accurately as possible).
  • iDisplace (deformer nr 2).

Wish: Deformers that are assigned to a specific mesh in the lookdev scene also get published and assigned when loaded into the render scene assembly, just like the materials.

Is this possible?

FYI: The order of Polysmooth and iDisplace is important. Also note that it is possible to have one iDeform deformer and connect it to several meshes, while the polysmooth is only assignable per mesh. It would have been nice to have one polysmooth operation with several meshes attached to it as well.

ls() service

Problem

ls() is the central mechanism to asset management at Mindbender. It works by (1) scanning the filesystem for assets and (2) delivering them as dictionaries of data as defined by their respective “schema”.

from mindbender import api

for asset in api.ls():
  print(asset["name"])

Each dictionary encapsulates all information about a given asset, including (1) versions, (2) subsets, (3) representations and ultimately (4) files.

The API is straightforward.

  1. api.register_root("/path/to/assets")
  2. Call ls() to read its contents.

At the moment, this reading of content occurs individually on each client - e.g. an artist's computer. This means that (1) an artist requests a listing, (2) ls() is invoked and requests a listing via the OS, (3) the OS makes several (four) requests for a directory listing over a network per asset and (4) the resulting data is formatted as per its schema.

With 10 artists working simultaneously on small projects, even with the subpar network performance already present at Mindbender, the number of listings could peak at ~2000/second (50 assets × 10 artists × 4 levels), but the bottleneck is likely still not disk performance and likely need not be of concern.

However, as crew and projects get larger, and as additional strains are added to the filesystem (e.g. sync), the impact on performance may start to become noticeable.


Solution

In this case, I have a suggestion for one achievable method that may alleviate this burden for up to an estimated 100+ artists and feature-film-sized projects.

It consists of (1) a client and (2) a server.

A dedicated computer hosts an “ls service”. This service takes requests for asset listings and reports back.
ls() is retargeted to make requests to this service, as opposed to querying the networked filesystem directly.

The service may then (1) reside directly on the computer closest to the filesystem (in this case the NAS) and (2), most crucially, cache prior requests for listings.

The caching mechanism can in the simplest case hand pre-made results back without a roundtrip to the filesystem and update the cache only at a given interval - such as every 10-30 seconds.

This means that the filesystem, no matter how many requests are made, is never bothered more than once every 10-30 seconds.

The immediate side-effect of this caching, as with caching of any kind, is data going stale; that is, the artist not receiving the latest information when it is expected. As a solution, the caching mechanism could be integrated with publishing or the native journalling mechanism of the operating system, such that it rarely has to perform a "full scan" and instead mostly updates the parts that change.

For example, given a cache made 30 seconds ago, a newly published asset makes a request to the ls service, saying "Hey, I just added 'AssetA' to this location". The caching mechanism then updates its internal cache, without having to roundtrip to the filesystem.
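
A minimal sketch of the caching layer only, assuming the existing api.ls() is the expensive filesystem scan; the 30-second interval and the add() hook mirror the behaviour described above.

import time
import threading

class LsCache(object):
    def __init__(self, scan, interval=30):
        self._scan = scan          # e.g. lambda: list(api.ls())
        self._interval = interval
        self._assets = []
        self._stamp = 0
        self._lock = threading.Lock()

    def ls(self):
        """Return cached assets, rescanning at most once per interval."""
        with self._lock:
            if time.time() - self._stamp > self._interval:
                self._assets = list(self._scan())
                self._stamp = time.time()
            return list(self._assets)

    def add(self, asset):
        """Called on publish: update the cache without touching the filesystem."""
        with self._lock:
            self._assets.append(asset)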


Remote

Once the above service is implemented, an additional benefit arises. Namely that if local artists can query a single source of information for assets, so can remote artists.

Practically, anyone working remotely could send a request to the same server for a listing of assets. Once an asset is chosen, their computer could then evaluate whether (1) it is already available locally or (2) it should be "made available offline".

Due to caching, requests made to the service would be virtually free, enabling both local and remote computers to poll it for updates. That is, whenever a new asset is made available, the artist could receive a notification - such as a balloon popup in the task bar.

LookDev transfer Error

(Video attached: Example move)

Steps to reproduce the bug:

  • Create a model for assetA.
  • Create a lookDev for assetA.
  • Create model assetB.
  • Create lookDev for assetB.
  • Load ModelA and Lookdev into a new scene
  • Load ModelB and Lookdev into same scene
  • Load Lookdev for ModelA again and it will get the ModelB lookDev and it can't be changed back with the loader.

It seems like it is always the first model and lookdev that you load that transfers the shader to the wrong asset. As can be seen in the video, even though I create a completely new scene the bug is still there.

Display Additional Data

Goal

Simplify making informed decisions.

Data

  • Author
  • Version
  • Date
  • Frames
  • Duration
  • Pre-roll
  • Post-roll

Inspiration

Colorbleed has got this going.


Disappearing mesh at render

  • Importing alembic and importing historylookdevDefault
  • Render looks good so far.
  • Selecting alembic reference node and then the lookdev reference node and using auto connect
  • Assets move into place as expected, but in the render they are invisible.

Not happening to all meshes. Have not yet found any difference on the disappearing assets.

Have a Set dressing family with both static models and rigged models

Goal

Simplify the creation of environments.

Requirements

An environment must support..

  • Static geometry
  • Pre-made animation cycles
  • Rigs to be animated later

Implementation

We need to facilitate creating an environment consisting of both static models and live rigs. The rigs are to be manipulated further by animators once an environment has been published.

Consider adding a new family, mindbender.set for Set Dressing, where an artist doing Set Dressing is referred to as Set Dresser.

Alternatively, consider introducing mindbender.layout for publishing rigs as-is, to be further manipulated by animators. An environment artist could then produce both an environment, either as a model or a set, and a layout for the things intended to be manipulated further.

The layout family could then also be used for preparing animation scenes in general, such as the placement of characters.


Motivation

Modified from version originally written by @CalleHalldin

The current workflow is split into two stages - sandboxing and refinement.

Sandbox

Models and the camera are imported and placed. If a tree, for example, needs to be a stump, the Set Dresser will simply delete and deform to get the desired look. Assets may be represented as simple shapes, like a box or a sphere.

Refinement

For the shot in which the environment is meant to be used, we reference in the sandbox and use it as a reference for how the final Environment should look.

All assets that are needed have a final model, so we simply load them through the loader together with the shot camera and place them so they match the sandbox reference.

All models are cloned with the "clone (special)" tool so they can be tweaked if needed. For example, the camera needs to go low in this shot but that is impossible without the ground intersecting with the camera, so we simply push the ground down. Or we deform models to get a nicer image composition, for example moving a few leaves that are in the way of the camera, or deforming the mountain in the background to get a specific shape for this camera angle.

All these cloned models are then added as a model named Environment for this specific shot through the creator: simply group them under a ROOT group and create a model set named Environment.
We publish the camera as an Alembic and the Environment as a model for the shot.

The animator then loads the camera and Environment model from inside the Shot asset.

Problem

This workflow becomes problematic when we have rigged assets that are a part of the environment and are needed to create the final image composition. We want the set dressing artist to be able to use and place these rigged assets and export these positions, and ideally pose the rigs if needed beyond the basic main control position.

Suggested Solution

This is not a solution since I don't know how to technically solve it.

We need a new family for Set Dressing that accepts models and the controls_SETs of rigged assets.
Maybe if the pipeline finds a controls_SET as part of the instance, it will export only the values of the controls and any children below.

Maya's Atom

One way of doing this would be to use Maya's built-in "Atom" exporter to export just the values, plus a small list of which assets were used, which could be the namespaces of the controls_SETs?

Ideally, when the animator loads a Set Dressing for a specific shot, it would load the static models, load the rigs used and import the values to get placement and pose.

Automatic Animation Instances

Another problem is that, since we have rigs in the scene, the pipeline automatically creates an Animation set, so on publish it will also try to export the rigs as Alembics.

I don't want to remove this since it's useful for animators, who are not tech savvy. Is there a way to get around this when we create a Set Dressing, or should the Set Dresser simply remove them before publishing?

1-Based Versions

Goal

Simplify internal versioning logic.

Motivation

At the moment, versions start at 1 and increment upwards, but Python lists index them starting at 0.

versions[0]["name"] == 1

This is confusing.

Implementation

Access versions not via Python list indexes, but via dictionary keys.

# Before
versions = [
  "version1",
  "version2",
]

# After
versions = {
  "1": "version1",
  "2": "version2"
}

Container/Instance relationship

Goal

Associate an Instance with a Container, such that when a Container is deleted the Instance dies with it.

Implementation

In remove(), read the custom attribute children of the Container and remove the contained members, but only if the member has an equivalent parentContainer attribute with the Container as its value. This would enable a user to "disconnect" the relationship and not have their instances accidentally deleted.
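
A minimal sketch of that remove() behaviour, assuming children is stored as a comma-separated string attribute and parentContainer as a string attribute on the member.

from maya import cmds

def remove(container):
    """Delete a container and any instances it created, unless the link was broken."""
    if cmds.attributeQuery("children", node=container, exists=True):
        for child in (cmds.getAttr(container + ".children") or "").split(","):
            child = child.strip()
            if not child or not cmds.objExists(child):
                continue
            # Only delete members that still point back at this container.
            if (cmds.attributeQuery("parentContainer", node=child, exists=True) and
                    cmds.getAttr(child + ".parentContainer") == container):
                cmds.delete(child)
    cmds.delete(container)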

Motivation

Loading a mindbender.rig automatically creates a mindbender.animation Instance. This instance should be deleted upon deleting the original rig.

Container Manager doesn't change version on loaded camera

"Kameran kan inte upgrade/downgrade
kamera
Kamera exporterades inte med animation fören jag slog på "shake", Ska undersöka lite mer på denna."

Steps to reproduce:
When working with the container manager:

  • Open a new scene with already published assets
  • Load the shot camera
  • Open the container manager
  • The camera's versions are visible
  • The camera version cannot be changed, neither up nor down

Publish source file using native format

Goal

Enable publish of source file from Maya with unknown nodes present.

Motivation

At the moment, source files are always exported as mayaAscii, but when the artist has previously saved their scene using mayaBinary, the conversion of unknown nodes cannot be made and the publish fails.

Implementation

Match the source file with the format in which the artist has saved their file.
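
A minimal sketch of the idea inside the extractor, assuming the scene type can be queried from the currently open file; the output path is illustrative.

from maya import cmds

def source_format():
    """Return a (file type, extension) pair matching the artist's saved scene."""
    if cmds.file(query=True, type=True)[0] == "mayaBinary":
        return "mayaBinary", ".mb"
    return "mayaAscii", ".ma"

file_type, extension = source_format()
cmds.file("source" + extension,
          exportAll=True,
          type=file_type,
          preserveReferences=True,
          force=True)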

Rigs with shapes in out_SET

Problem

The out_SET defines which meshes are to be exported as Alembic. If there are shapes in there, the Alembic exporter will mistakenly attempt to unparent the shape from its transform, so as to eliminate any parent hierarchy.

When the member is a shape, an error is thrown.

from maya import cmds
from mindbender import maya
from pyblish import api, util  # pyblish's api; importing mindbender's api here as well would shadow it

cmds.file(new=True, force=True)

transform, generator = cmds.polyCube(name="body_PLY")
shape = cmds.listRelatives(transform, shapes=True)[0]

cmds.select(shape, replace=True)
instance = maya.create("Bruce01", family="mindbender.animation", options={"useSelection": True})

cmds.file(rename="temp.ma")
cmds.file(save=True)

context = api.Context()

util.collect(context)
util.extract(context)

The result is..

util.extract(context)
# pyblish.extract_animation.ExtractMindbenderAnimation : Loading plug-in.. # 
# pyblish.extract_animation.ExtractMindbenderAnimation : Extracting animation.. # 
lib.export_alembic('AbcExport -j "-file ../Bruce01/20170121T192149Z/Bruce01.abc -frameRange 1.0 48.0 -root body_PLYShape -attrPrefix mb -uvWrite "')
// Error: Can't translate body_PLYShape since it doesn't have a parent. // 

Solution

I see two potential solutions: (1) implement logic in the Alembic exporter to realise it cannot unparent shapes, and automatically assume the user meant the parent instead, or (2) validate against including shapes in a rig's out_SET; see the sketch below.
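
A minimal sketch of option (2) as a Pyblish validator; for brevity the set is looked up by name, whereas in practice the members would come from the collected instance.

import pyblish.api
from maya import cmds

class ValidateOutSetMembers(pyblish.api.InstancePlugin):
    """out_SET may only contain transforms, never shapes."""
    order = pyblish.api.ValidatorOrder
    families = ["mindbender.rig"]

    def process(self, instance):
        members = cmds.sets("out_SET", query=True) or []
        shapes = cmds.ls(members, shapes=True)
        assert not shapes, "Shapes found in out_SET: %s" % ", ".join(shapes)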

_mkproject.bat bad help

Problem

The help text for _mkproject.bat says the wrong thing if the project name includes a subdirectory.

MEINDBENDER START ---------------------------


                _,--,
             .-'---./_    __
            /o \\     '-.' /
            \  //    _.-'._\
             `"\)--"`


  Welcome marcus!

  1. Type first characters of a project, e.g. "p999_"
  2. Press TAB to cycle through matching projects

 ---------------------------------------------

$ p999_Meindbender_Sync
 Uh oh..
   Specify either "assets" or "film"

 Example:
   $ p999_Meindbender_Sync\f02_prod assets
   $ p999_Meindbender_Sync\f02_prod film

$

This line...

   $ p999_Meindbender_Sync\f02_prod assets

Should be..

   $ p999_Meindbender_Sync assets

The reason is that the project name is..

call _mkproject %~dp0 %~n0\f02_prod %1

It includes f02_prod in the name.

Implementation

To fix this, we'll need to strip characters after \ in the name in _mkproject.bat.

Single Loader Per Family

Goal

Simplify simplify simplify.

Motivation

At the moment, loaders are identical to Pyblish plug-ins, meaning each asset has a corresponding family and the family determines which of the registered loaders to run.

So far, there hasn't been a need for more than one loader, and in fact multiple loaders complicate logic and understanding. For example, should each loader create its own Container, and if so, what does that mean? Loading an asset, like an animation cache, should ultimately end up in a single container, representative of the loaded animation. Currently, the animation could end up in multiple containers - what does that mean? What does each container represent?

Implementation

One loader per family.

Organizing Work folders

Goal

Simplify finding the latest working file.

Motivation

Right now on the current project you constantly have to check all "working folders" - such as calle/maya/, arvid/maya/ and emre/maya/ - to find the latest file, which becomes time-consuming.

It is a cause of the kind of mistakes where you accidentally open up a version that you think is the latest but isn't.

At our company we share a lot of files between artists so it might be a better idea to group them altogether and treat the company as one big user.

Implementation

Remove the user/ directory from the work/ directory.

# Before
Bruce/work/modeling/calle/maya/...

# After
Bruce/work/modeling/maya/...

Users would then be responsible for keeping up with a naming convention for their scene files that reflects their intentions, and could no longer store personal files in the work directory.

wrong folders and ignoring computer name!

The issue is apparently project-specific. On one of the projects, when I have navigated to and opened e.g. Maya lookdev and then open a saved file, it looks for \assets\cat\publish\modelDefault and not \f02_prod\assets\cat\publish\modelDefault. It somehow skips the \f02_prod.

And on another project it started to skip the computer name; instead of \work\lookdev\TARS\maya it maps to \work\lookdev\maya.

I have tried on other computers, and restarting both mb-dev.bat and the computer, but the issue remains.

extract_animation.py exports wrong framerange

Problem:

Seems that extract_animation.py doesn't export the correct frame range.

This bit of code doesn't seem to generate the correct frame range:

frame_range=(instance.data["startFrame"],
             instance.data["endFrame"])

Solution

The code could be changed to this:

frame_range=(int(cmds.playbackOptions(query=True, animationStartTime=True)),
             int(cmds.playbackOptions(query=True, animationEndTime=True)))

Because that info gets checked by another validator before this code executes.

Maybe there is a better or cooler way of doing it.
What do you think?

Pyblish still thinks deleted container manager is up for publishing

"Managern deletar referensen i reference editorn men referensen finns fortfarnade i managern och pyblish tror fortfarnade att asset finns kvar. Antagligen pga att "Asset set" inte deletas."

Steps to reproduce:
When working with the container manager:

  • You load an asset into a scene
  • Then you want to delete the asset from the scene.
  • Deleting it from the manager does the job for the reference editor.
  • The asset is still present in the manager window
  • Open up the publishing window
  • The asset is there as well

Categories in Loader GUI

Goal

Visually organise assets according to a "category" in the Loader, such as "Environments", "Characters", "Props", etc.

Alembic import sets timeline in maya

Disclaimer
This is not a feature request

"Import alembic updaterar inte timelinen."

Does the asset loader change settings in Maya when an asset is loaded?

For example, would it change the timeline IN and OUT?

Choose Representation

Goal

Enable use of available representations.

Motivation

At the moment, every version comes with at least two representations: one "primary", such as the cache, and one "secondary", such as the original source file. But only one of these is usable via the Loader at the moment, and which one gets picked is pre-defined.

Implementation

Add another column for representations.

Animation family not updating its members

Problem

The animation family gets created automatically when an animator loads the rig from the loader, with the current meshes in the OUT_set defining what should be exported with Alembic.
The problem appears when a mesh gets updated/renamed or completely redone, or new meshes get added: the animation family does not get updated and in worse cases ends up completely empty or ignored by the publisher.

Suggested Solution

To solve this, I suggest that the pipeline always re-creates the animation family when we update the rig through the manager, to keep the animation family up to date.

Disable refresh on publish

Goal

Optimise publishing.

Motivation

Exporting animations can sometimes take longer than necessary due to viewport updates.

Validate Single Assembly

When trying to publish a rig, Validate Single Assembly shows an error.
This is the outliner for this scene:

Maya Outliner
  box01_:modelDefault
├─  box01_:ROOT
│ └─  box01_:MESH
│    └─  box01_:BOX

  ROOT
├─  MESH
│ └─  box_1
├─  OUT
│ └─  box_2
├─  DATA
│ └─  joint1
├─  Controls
│ └─  nurbsCircle1

  rigDefault_SET
├─  ROOT
├─  controls_SET
│ └─  nurbsCircle1
├─  out_SET
│ └─  box_2

Reload reference issue in render scene

When I open a saved render scene, the scene is empty. The folders are there, and the references are referenced in, but I have to reload the references to make them appear in the scene.

Zero out meshes on History Look publish

Goal

Optimise load and file-size on published history lookdev assets.

Motivation

At the moment, meshes involved in lookdev are exported alongside the asset. Once loaded, however, these meshes are entirely discarded. This means publishing includes writing these meshes to disk (megabytes to gigabytes of data) and later reading them back from disk, with no real benefit.

Implementation

On publish, not only disconnect the original input, but temporarily replace it with an empty mesh. This will cause Maya to flush the input with empty data. Once publishing is completed, we can re-connect the original input.
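
A minimal, hypothetical sketch of swapping an empty mesh into a deformer input around export; the plug name in the usage line is illustrative and real inputs would be discovered from the published history.

from contextlib import contextmanager
from maya import cmds

@contextmanager
def zeroed_input(destination_plug):
    """Temporarily feed an empty mesh into destination_plug, then restore the original."""
    sources = cmds.listConnections(destination_plug, plugs=True,
                                   source=True, destination=False) or []
    original = sources[0] if sources else None
    empty = cmds.createNode("mesh", name="emptyTMP")  # empty mesh shape
    try:
        cmds.connectAttr(empty + ".outMesh", destination_plug, force=True)
        yield
    finally:
        if original:
            cmds.connectAttr(original, destination_plug, force=True)
        cmds.delete(cmds.listRelatives(empty, parent=True))

# Usage around the actual export (plug name is hypothetical):
# with zeroed_input("iDisplace1.input[0].inputGeometry"):
#     extract_look(...)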

Choose Version

Goal

Facilitate latest, approved and deprecated versions.

Motivation

At the moment, loading an asset automatically picks the latest version. Sometimes, versions carry specific meaning such as whether or not a version has been reviewed and approved. Other times, some versions are known to work better, perhaps because one was just published broken.

Implementation

Append another column to the interface.

Back 2.0

Goal

Make moving backwards in mb.bat more predictable and stable.

Motivation

At the moment, back.bat performs the action of erasing previous changes - such as entering an asset - and taking you back to where you were before. It does so by erasing environment variables, but it couldn't erase those that were appended to e.g. PYTHONPATH without risking erasing more or less than it is supposed to.

:: Enter a project
set PYTHONPATH=/asset/scripts;%PYTHONPATH%

:: Back
:: How to erase an inserted path? Was there more than one inserted path? Who knows?
set PYTHONPATH=???;%PYTHONPATH%

Implementation

When typing cmd from an existing terminal, your current environment is duplicated and put into a new terminal session. At this point, you are in a shell within a shell. Basically like Inception. And like Inception, you can exit one session which would take you back to the previous session.

$ echo %MYVAR%
%MYVAR%
$ cmd
$ set MYVAR=Hello
$ echo %MYVAR%
Hello
$ exit
$ echo %MYVAR%
%MYVAR%

The above session illustrates this in action. We first attempt to echo a variable we haven't yet created. We then enter a new session with cmd, create the variable and echo it successfully.

We then type exit which closes this session, restoring the environment to how it was before entering it. In this case, %MYVAR% is removed.

The same applies to appending to variables, such as PATH and PYTHONPATH. Modifying these in a session within a session enables the user to exit and restore the previous environment.

Launcher GUI

Goal

  • Simplify entering assets for artists
  • Simplify creating assets for supervisors
  • Enable implementation of #25 and #31
  • Notify local power users (e.g. Arvid) of updates to mindbender-core
  • Notify remote users of updates to mindbender-setup

Implementation

The GUI should feature a method of choosing (1) project, (2) asset and (3) app along with the ability to create new (1) projects and (2) assets.

 ___________________________________________
|                                        _ x|
|___________________________________________|
|project1      |assets| shots|app1          |
|project2      |______|______|app2          |
|...           |asset1       |...           |
|              |asset2       |              |
|              |...          |              |
|              |             |              |
|              |             |              |
|              |             |              |
|_____________+|____________+|______________|

It should also facilitate editing of projects and assets, such as frame rate and start/end frame, similar to the Attribute Editor of Maya.

 _____________________________________________________________
|                                                          _ x|
|_____________________________________________________________|
|project1      |assets| shots|app1        |key    value       |
|project2      |______|______|app2        |key    value       |
|...           |asset1       |...         |..     ...         |
|              |asset2       |            |                   |
|              |...          |            |                   |
|              |             |            |                   |
|              |             |            |                   |
|              |             |            |                   |
|______________|_____________|____________|___________________|

How do you continue on someone else's work?

When you publish, you export from your work what the next artist needs.

What happens when you do a handover - when you have done 80% of the job and the next artist is supposed to complete it?

Can we create an extra option in the loader to load the ".source" instead of the export?

Asset ID

Goal

Enable directly referencing an asset, subset, version and representation via a short and unique identifier, such as 344574b2.


Motivation

api.ls() reads from disk and returns a fully populated dictionary for each asset, which leads to unnecessarily high disk usage.


Implementation

This is the current interface.

from mindbender import api

for asset in api.ls():
  print(asset["name"])

Even if you know the asset, there is currently no way of receiving the dictionary for a particular asset without looping through all of them.

for asset in api.ls():
  if asset["name"] != "MyParticularAsset":
    continue

  assert asset["name"] == "MyParticularAsset"

This is how search currently works.

# Version 3 of modelDefault for Bruce
asset = next(api.search("Bruce/modelDefault/3.ma"))

Worst case, ls must loop through every asset on disk before finding the one you are looking for. Because each asset contains one or more subsets, each subset one or more versions, and each version one or more representations, this ultimately leads to O(n^3) complexity. In practice, a simple lookup on a regularly sized production may take several seconds.

By instead keeping an index of every asset, subset, version and representation, no search would be needed and one could look up any of these components in O(1) time. To not depend on naming, an index could instead consist of a random string, such as (part of) a UUID.

index = {
  "344574b2": {"schema": "asset", "name": "Bruce", "subsets": ["6434t54v", "fd353h1y"]},
  "7df16gdv": {"schema": "asset", "name": "Stella", "subsets": ["kr34j513", "3l25l464"]},
  "1hdf16gdv": {"schema": "version", "version": 4, "representations": ["u2hc513", "6u1l464"]},
}

# Instantly fetch a specific component
v004 = api.find("1hdf16gdv")

The index may then reside within a database, hosted either locally or even remotely (such as in Firebase). Locally is of course preferable for speed and reliability, but would require at least one dedicated computer with guaranteed up-time.
