
pyblish-base's Introduction


Pyblish

This project contains a known (but possibly old!) working combination of supported projects. See each corresponding project for the latest information, along with our forum and chat room below.

Installation

Pyblish is available for Windows, Linux and macOS, for both Python 2 and 3, via PyPI.

$ pip install pyblish

Contents

The PyPI package will install the latest known working combination of Pyblish projects, see setup.py for details.




Usage

Once installed, you will find a number of new Python packages available, most prominently pyblish.api.

import pyblish.api
pyblish.api.discover()

See the README of each individual project for details.


pyblish-base's Issues

Abstract properties of Filter

It just occurred to me that the mandatory properties of Filter et al. should really be abstract properties. E.g.

import abc

class Filter(object):
    __metaclass__ = abc.ABCMeta  # required for abstract members to take effect

    @abc.abstractproperty
    def families(self):
        pass

So that whoever implements a plugin will also have to implement the properties. E.g.

class MyFilter(Filter):
    @property
    def families(self):
        return ['maya']

At the moment, you could implement a plugin but forget about these properties, which would make your plugin not work.
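To illustrate the enforcement this would buy us, here's a minimal sketch in Python 3 syntax. The class names mirror the ones above, but the `abc.ABC` base and the `ForgetfulFilter` example are my own illustration, not the actual Pyblish code:

```python
import abc

class Filter(abc.ABC):
    """Baseclass whose mandatory properties are abstract."""

    @property
    @abc.abstractmethod
    def families(self):
        """Families this plugin applies to."""

class MyFilter(Filter):
    @property
    def families(self):
        return ['maya']

class ForgetfulFilter(Filter):
    pass  # forgot to implement `families`

print(MyFilter().families)  # ['maya']

try:
    ForgetfulFilter()
except TypeError as exc:
    # Instantiation is refused outright, instead of failing later at run-time
    print("refused:", exc)
```

The key difference from the current behaviour: the mistake surfaces at instantiation time rather than when the missing property is first accessed.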


Published file structure concept

From referencing a number of pipeline structures, and from experience, the folder structure is specific to each company and depends on their particular requirements.

So what I would like to suggest is that we make the folder structure configurable, which is flexible and lets us focus on how data is being processed.
For the time being we can take one default structure and start working on it.

Eg structure:

For Asset

/published
    /base_type (char, props, sets, rig)
        /asset_type (forest)
            /asset_name (forest_01)
                /department
                    /version
                        /scenes
                            file.ma
                        /images

For Shot

/published
    /seq
        /shot
            /department
                /version
                    /scenes
                        file.ma
                    /cache
                    /images
                    /movie

Is that fine with you?

QML

I'd like to propose evaluating QML for the GUI portion of this project.

For those new to QML, I would recommend whetting your appetite with these videos.

In a nutshell, QML is the successor of Widgets - officially - and the future of UI development - in my opinion :)

Some of the immediate benefits are:

  • Performance - identical to native Qt code (because it is)
  • Look - styling and animations are a primary concern of the language itself, think CSS
  • Future - QWidget is officially complete and won't be getting any new features.
    (Remind me to post a reference to that statement)

Whereas some disadvantages are:

  • It's new to us, which means we'd all be re-learning things
  • It's new technology, which means bugs with scarce resources for help

Some learning resources:

  • Qt DevDays - These are recordings of conferences held about Qt in general, including QML, and should provide some perspective on what it is and where it is going.
  • QML Book - A nice getting-started guide
  • Limetext - A Sublime Text port to Go and QML, for some QML used in an actual application.

Discuss!

Storing persistent state

I encountered an issue whilst setting up a character for animation.

Backstory

In a nutshell, it has to do with the value of certain attributes upon the asset being published, such as visibility and connected animation. Whilst developing, it can sometimes be helpful to apply certain values to attributes, possibly for better visualisation whilst working (such as having joints visible) or for feedback during weight-painting (such as having animation on controls).

I typically suck it up and make sure to remember toggling these things off, ensuring channels are defaulted back to zero and removing any animation within the scene. The issue is that this isn't very flexible. Once the animation is gone, I'd either have to undo once publishing finished, or re-apply it. But my memory isn't perfect, and I often encounter situations where one thing or another was forgotten upon publish.

On a longer project, I made an effort to automate the process of storing state and applying it upon the asset being published. This worked very well and allowed me to take altering attributes to the next level; not having to worry about being a human reference counter is truly liberating.

Maybe Publish could fill this void too.

Use-case

The workflow I'm imagining is this:

  1. Artist creates asset
  2. Artist records state at which asset is to be published.
  3. Artist modifies state
  4. Artist publishes asset
  5. Publish stores the current state and applies original state
  6. Asset is published
  7. Previous state is re-applied

Technically

For things such as attribute values, this is relatively straightforward. For animation curves, I'd imagine temporarily disconnecting the nodes from their channels upon publish, and re-connecting them afterwards.
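As a host-agnostic sketch of that idea, here's what storing and re-applying state could look like. A plain dict stands in for the host's attributes; in Maya these reads and writes would be `cmds.getAttr`/`cmds.setAttr` calls. Everything here (names, attributes, values) is illustrative only:

```python
from contextlib import contextmanager

# A plain dict stands in for the host's scene attributes (hypothetical shim).
scene = {"joints.visibility": True,   # left on for visualisation while working
         "ctrl.translateX": 3.0}      # animated away from its default

@contextmanager
def published_state(recorded):
    """Apply the recorded publish-time state, restoring the working
    state afterwards (steps 5 and 7 of the workflow above)."""
    working = {attr: scene[attr] for attr in recorded}  # store current state
    scene.update(recorded)                              # apply recorded state
    try:
        yield scene  # publishing happens here
    finally:
        scene.update(working)                           # re-apply previous state

recorded = {"joints.visibility": False, "ctrl.translateX": 0.0}
with published_state(recorded) as published:
    snapshot = dict(published)  # what the publish would see

print(snapshot["joints.visibility"], scene["joints.visibility"])  # False True
```

The `finally` clause is what makes this safer than remembering to undo by hand; the working state comes back even if extraction fails midway.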

Goal

The end result is an asset being published with identical settings (as we've already spoken of) but also an identical state. An alternative approach may be to introduce additional validators, which would result in the same thing - e.g. a consistent state - but without liberating the artist, who would still have to manually restore any settings that help during development.

I'm not sure how applicable this is to other areas, rigging is possibly an area where state is most important and most often altered. Let me know what you think!

Best,
Marcus

Existing publishers/references

If anyone knows of any existing publishers out there, either in the form of github projects, or just glimpses of it in showreels, could you share it here?

The only one I know of is Quality GuAard but I remember seeing reels with publishers before, I just can't seem to find any.

Pose scenarios to stakeholders

Try and provide a minimal overview of scenarios, followed by a more thorough version, and present it to stakeholders and gather their preferences and requirements.

Refactor Filter and Selector to both take Context as input

@tokejepsen mentioned here in #40:

Maybe we can even initialize both Validators and Selectors with a context, and get that same base class?

Which rang a bell in my head and I'll have a look at implementing this tomorrow morning.

Currently, Selectors are instantiated with a Context and add Instances to it, whereas the other filters - validation, extraction and conform - are each instantiated with an Instance and operate on its members (the nodes in the scene).

In a way, we have been pre-determining which instances go with which plugin. Instead, we could potentially pass the entire context to each validator and trust it to only pick instances relevant to itself. E.g. given a context of 3 instances - say 2 of family "model" and 1 of family "animation" - a validator of family "model" would choose to only validate the 2 models, and leave the animation family alone.
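A minimal sketch of that idea, with illustrative class names (these are not the actual Pyblish API):

```python
# Every plugin receives the full Context and picks the instances
# relevant to itself, based on family.
class Instance(object):
    def __init__(self, name, family):
        self.name, self.family = name, family

class Validator(object):
    families = []  # families this validator applies to

    def process(self, context):
        """Receive the full context, but only touch relevant instances."""
        processed = []
        for instance in context:
            if instance.family in self.families:
                self.process_instance(instance)
                processed.append(instance.name)
        return processed

    def process_instance(self, instance):
        pass  # actual validation would go here

class ValidateModels(Validator):
    families = ["model"]

context = [Instance("body", "model"),
           Instance("head", "model"),
           Instance("walk", "animation")]

print(ValidateModels().process(context))  # ['body', 'head']
```

The animation instance passes through untouched, which is exactly the filtering behaviour described above.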

Software support

Let's determine the precise software that will be supported; that will help determine the technical limitations we need to work within, as well as which Python version to target, amongst other things.

Some suggestions:

  • Autodesk Maya, versions 2008 and upwards.
  • Autodesk Softimage, since Python was supported
  • Autodesk Mudbox
  • Foundry Nuke, since Python
  • Foundry Mari
  • SideFx Houdini
  • Blender

Ideally, these would also see some form of support, possibly without integrations:

  • Adobe Photoshop
  • Adobe After Effects
  • Adobe Premiere
  • Pixologic Zbrush

For games, these may also be included:

  • Unity
  • Unreal Editor

Feel free to suggest more, along with why you think they should be included.

Pyblish

As discussed in #56 and #13, Publish has undergone a name-change, but will not turn into a namespace package.

Introduction

Because the change involves renaming this repository, I'd just like to take a moment to explain how to move on from here. As you have probably already noticed, Publish is called Pyblish, but all of your original links are preserved. This means forks are okay, external links are okay. In general, nothing has changed except for how you pull.

Pulling

Your fork will remain the same, including the name. This is fine, forks can be named anything. It's simply convenience that they are named the same. But if you'd like, and I recommend you do, you can rename your fork to pyblish. Nothing will break, promise.

Finally, you'll need to configure your "remote" in git to point to the new repository, so that you can continue pulling updates from it.

http://stackoverflow.com/questions/2432764/change-the-uri-url-for-a-remote-git-repository

And that's it.

Some things will be off at first, like image links on the front-page, but do keep an eye out for other issues and if you have any trouble, post here.

Best,
Marcus

Project website

We'll need a proper website with logo and introduction to the project. If this is you, let us know here!

Address suggestions:

  • publish.io (n/a)
  • publish-app.org
  • publishcg.io

Node-based workflow

Branching off of #41 to focus the discussion on the alternative, node-based workflow.

Related:

Do you think that it would be possible to implement coral within the tool
or would you rather create your own version?

The issue with using Coral as a host for our processes is that most of the project is written in C++ and would require us to also provide build instructions for our users, which vary per platform and can get quite lengthy. And as Publish doesn't yet have a need for performance, most of its benefits would go unnoticed.

I've had a quick look at Depends yesterday and it might be a better fit, being pure Python and PySide. It isn't being developed with Windows in mind, but it ran just fine and I implemented a basic "recipe" and node in just an hour or two.

It does however (as far as I can tell) mainly concern stateless processes - in that each node represents a new process which takes arguments as input and produces results via stdout. For us, this would mean launching a new instance of our host per plugin, as each process is unaware of any other process (hence "stateless") and could thus not utilise each other's already-running processes.

Provide debug prints for plugins

At the moment, it's quite tricky to see what's going on when running the various plugins.

Each plugin is responsible for providing its own debug information, which is cool, but doesn't provide consistency and can easily be forgotten. Instead, we could provide built-in statements - like "Running.." and "Passed.." - that get run automatically, alongside any custom messages produced by each individual plugin.

# The super() method could hook into already-implemented logging
class MyValidator(Validator):
    def process(self, context):
        super(MyValidator, self).process(context)

# Or, the baseclass could provide manual hooks
class MyValidator(Validator):
    def process(self, context):
        self.log_start()
        # ...
        self.log_finished()

# Or, a metaclass could append functionality implicitly
class MyValidator(Validator):
    def process(self, context):
        # ...
        pass
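For the metaclass variant, here's a minimal sketch of how `process()` could be wrapped implicitly. The `messages` list stands in for a proper logger, and all names are illustrative only:

```python
import functools

messages = []  # stand-in for a proper logger, for illustration

class LoggingMeta(type):
    """Implicitly wrap each subclass's process() with start/finish messages."""
    def __new__(mcs, name, bases, attrs):
        if "process" in attrs:
            original = attrs["process"]

            @functools.wraps(original)
            def process(self, context, _original=original, _name=name):
                messages.append("Running %s.." % _name)
                result = _original(self, context)
                messages.append("Passed %s.." % _name)
                return result

            attrs["process"] = process
        return super(LoggingMeta, mcs).__new__(mcs, name, bases, attrs)

class Validator(metaclass=LoggingMeta):
    def process(self, context):
        pass

class MyValidator(Validator):
    def process(self, context):
        messages.append("validating %d instances" % len(context))

MyValidator().process(["a", "b"])
print(messages)
# ['Running MyValidator..', 'validating 2 instances', 'Passed MyValidator..']
```

The plugin author writes only the body of `process()`; the consistent "Running.."/"Passed.." messages come for free.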

Phase 1

Liju,

What do you think about this for a first version.

The resulting paths for publishes land directly within the currently active project of Maya:

# Example
/
    project/
        scenes/
            my_scene_v001.ma
        publishes/
            v001/
               asset.ma

That way, publishing will always work without any particular setup. Once that works, we can think about how to re-direct the output elsewhere, but this could potentially be a fall-back/default in cases where nothing has been configured.

# Assets go here
publishes/
    v001/
        asset.ma

# Playblasts for review go here
publishes/
    v001/
        asset.img/
            0001.png
            0002.png
            0003.png
            ...

# Pointcaches (sequence)
publishes/
    v001/
        asset.pc/
             0001.poc
             0002.poc
             0003.poc
             ...

# Pointcaches (single file)
publishes/
    v001/
        asset.abc

# Multiple representations
publishes/
    v001/
        asset.ma
        asset.pc/
             0001.poc
             0002.poc
             0003.poc
        asset.abc
        asset.img/
            0001.png
            0002.png
            0003.png

Pull-request workflow

Like mentioned in one of the other issues, the workflow for pull-requests is something we'll have to discover along the way. There will inevitably be some conflict between our requests, but I think we'll get used to it eventually.

If anything comes to mind or if you're having trouble, this is the issue to take it up in. For starters, I just merged a few accidental branches that only add new content and don't modify anything (mainly Wiki images), and thus shouldn't interfere with any of your local branches.

Best,
Marcus

Metadata persistence

We'd like Publish to facilitate metadata being written to where it is typically written within your organisation.

However, since "typically" is most likely different everywhere we look, a compromise must be struck. A default that will satisfy those without any particular requirements, and configurability for those with more specific requirements.

To perfectly pinpoint what we mean by "metadata", here's a list of examples:

  • modifiedDate Date at which resource was last modified
  • publishedDate Date at which resource was published
  • loadedDate Date at which resource was loaded into host
  • author Author of published resource
  • links Names of related resources

Open Metadata is a library for metadata persistence that doesn't require any setup; let's see how it would look.

import time
import getpass

import openmetadata
from maya import cmds

def publish(instance):
    # Instances are validated and extracted
    ...

    # Metadata is written
    openmetadata.write(resource_path, 'modifiedDate', time.time())
    openmetadata.write(published_path, 'publishedDate', time.time())
    openmetadata.write(published_path, 'author', getpass.getuser())
    openmetadata.write(published_path, 'loaded',
                       cmds.getAttr(instance + '.loaded'))

The writing could be separated into an individual module so that others could hook into it for custom needs.

def publish(instance):
    # Instances are validated and extracted
    ...

    # Metadata is written
    metadata = {
        'modifiedDate': time.time(),
        'publishedDate': time.time(),
        'author': getpass.getuser(),
        'loaded': cmds.getAttr(instance, 'loaded')
    }

    write_metadata(metadata)

def write_metadata(metadata):
    """This is overridden per-studio"""
    write_openmetadata(metadata)

studio_publish.py

import openmetadata

def write_openmetadata(metadata):
    """This would be implemented per-studio"""
    for key, value in metadata.items():
        openmetadata.write(path, key, value)

Installing and Configuring

I would like to discuss tool installation and configuration.

Should installation be via pip, or some other way?

pip install publish

I would like to keep a Publish configuration directory somewhere on disk, with a folder structure where we can keep all associated files and folders. E.g.:

/Publish
    /Projects
        /my_project
            /app_01
            /app_02
            /global
    /global

Thoughts?

Plugins Logging

Fiddling with a validator now, I wanted some feedback from the plugin, so I just used a print statement. Shouldn't we implement logging for the plugins?

Extraction plugins are solely responsible for placing files

This is very powerful, but won't do much good if each plugin stores data in different locations.

What would be better is if the baseclass for Extraction could provide a mechanism with which to finally place files on disk where they belong, so that a Conform plugin could take it from there.

# Currently
class MyExtractor(Extractor):
    def process(self, context):
        # Produce files in a temp directory
        files = ['file1', 'file2']
        # Move files to the extraction destination
        for f in files:
            shutil.copy(f, '/my/unique/directory')

# Ideally
class MyExtractor(Extractor):
    def process(self, context):
        # Produce files in a temp directory
        files = ['file1', 'file2']
        # Move files to the extraction destination
        self.commit(files)
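A sketch of what such a `commit()` could look like, using only the standard library. The method signature, directory names and return value are assumptions for illustration, not the actual Pyblish API:

```python
import os
import shutil
import tempfile

class Extractor(object):
    """Sketch of a baseclass-provided commit(); signature is assumed."""

    def commit(self, files, destination):
        """Move extracted files from their temp location to `destination`,
        returning the final paths for a Conform plugin to take over."""
        if not os.path.isdir(destination):
            os.makedirs(destination)
        committed = []
        for src in files:
            dst = os.path.join(destination, os.path.basename(src))
            shutil.copy(src, dst)
            committed.append(dst)
        return committed

# Usage: produce files in a staging directory, then commit them
staging = tempfile.mkdtemp()
files = []
for name in ("file1.ma", "file2.ma"):
    path = os.path.join(staging, name)
    with open(path, "w") as f:
        f.write("# extracted content\n")
    files.append(path)

destination = os.path.join(tempfile.mkdtemp(), "v001")
committed = Extractor().commit(files, destination)
print([os.path.basename(p) for p in committed])  # ['file1.ma', 'file2.ma']
```

With the destination chosen by the baseclass rather than each plugin, every extractor places files consistently and a Conform plugin knows where to look.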

Plug-in system

To get the ball rolling on a potential plug-in architecture, I'll make some suggestions focusing on validations that might fit equally well for other aspects such as selection and extraction.

Assumptions

  1. Most if not all validations will be specific to a host; e.g. Maya validations probably won't apply to Houdini.
  2. Our included validations (if any) will most likely reside in one place (e.g. the main Python package), but users will probably (and should) want to store their own validators separately; e.g. in their own repositories.
  3. Validations may apply to multiple hosts
    • e.g. .. is this possible?
  4. Validations may apply to multiple families
    • e.g. checking for unique names applies to both models and rigs

Requirements

Based on the above assumptions, some of the requirements may be:

  1. The location of potential validators must remain configurable

  2. There must be support for multiple locations

    • /publish/validators
    • + /root/studio_validators
    • + c:\more\validators
  3. Validators must identify which families they support

  4. Validators must identify which hosts they support, e.g.

    # unique_names_validator.py example
    __families__ = ["model", "animation"]
    __hosts__ = ["maya"]
  5. Validators must conform to a common interface, e.g.

    >>> for validator in validators:
    ...     try:
    ...         validator.run(context)
    ...     except:
    ...         if not autofix:
    ...             raise
    ...         try:
    ...             validator.fix(context)
    ...         except:
    ...             raise
  6. Validators must be importable from any environment

    • e.g. A module written using maya.cmds won't be importable from the outside and thus not be introspectable for families or hosts

Implementation

These are some specifications for the Pull-request. Implementation is in plugin.py and tests are in register_validators_test.py

Logic can be summarised in three steps:

  1. Register path with validators
  2. Collect validators from registered path(s)
  3. Use validators

Here is a summary of the behaviour:

(1.) Validations are separate modules
(2.) Each module is known as a plugin
(3.) The paths where plugins are stored are augmented by the user, e.g.

 >>> import publish.plugin
 >>> publish.plugin.register_plugin_path(r'c:\my\validators')

(4.) Each plugin is wrapped in a Validator class which forwards the respective interface of each module. e.g.

 # unique_names_validator.py
 def process(context):
     pass

 def fix(context):
     pass

 # class interface
 validator = Validator(...)
 validator.process(context)
 validator.fix(context)

(5.) Each plugin is made importable by any process by performing localised imports, e.g.

def process(...):
    import host_dependent_module

Creating a separate menu for pyblish

How about having a separate menu for pyblish, rather than cluttering the File menu? That would help us put all pyblish-related tools in a separate section that's easy to find.
Thoughts?

Kickstarter-style roadmap

On any project on Kickstarter.com, a roadmap is laid out and augmented as the project comes along. Explore the possibility of having something similar for Publish, in an attempt at gathering interest and a better understanding of where the project is currently at; now and in the future.

Graphical User Interface Design

@madoodia is getting to work on a GUI for Publish so I figure it would be a good idea to summarise what it should do.

In order of relevance:

  1. I think the main thing to have a look at is visualising what has been selected, and what validators and extractors are going to be applied to each instance.
  2. Modifying what has been selected, including its configuration. This would involve also visualising the configuration of each item.
  3. Possibly as a separate GUI, but some means of aiding in selecting instances within the scene. Manually adding to a set or what not isn't as user-friendly as it could be.
  4. This is where I see the node-based workflows come in, to construct or visualise how instances will traverse through validators and extractors and to possibly allow for their path to be modified and possibly saved out and shared.

If you can think of anything else, this is the place to put it. I'd imagine we upload mockups of the GUI we envision right here, so we can talk about them together. I've posted the few references we have over in the Wiki - right here - but we should of course strive to be different, to be better. :-) Let's go!

Provide tests for various scenarios

Let's try and develop Publish using TDD. As such, provide test scenes with data prepared for publishing under various circumstances. These scenes should then be publishable by Publish without modifications.

Roadmap to 1.0

Hi everyone,

I've sketched out how I envision our roadmap from where we are today, the end of August, up till our first 1.0 release.

[image: roadmap sketch]

As you can see, we've started out small with small ambitions and have grown to envision Pyblish being a really useful tool in lots of situations for a lot of different people. We've begun work on multiple areas at once, so that you can all contribute with your interests and expertise without having to change too much of the main functionality. This means development will be able to accelerate, slowly but surely.

1.0

We're also getting close to a release usable by others. Now, I know this is a little scary and may feel premature. Committing to a 1.0 means that we will have to stand by and support our mistakes and awkward implementations and workflows.

However, everything we learn from the 1.0 will be directly applicable to a 2.0, and a 2.0 doesn't have to be backwards compatible with the 1.0, thus allowing us to scrap old code in favour of new code.

The critical point of a 1.0 is to get something into people's hands. Something they can confidently develop with, without feeling like their plugins will break with the next point release (e.g. 1.1). Only when we get it into people's hands and into actual productions will we be able to draw estimates on where to go next, what to improve and what to leave as-is.

It's only a sketch

Yes, take it with a grain of salt. It will most likely change a lot, and that's a good thing. It means our understanding of the project and its goals has increased. One of the things that may change a lot is our node-based workflow, and that's completely fine. At no point should we stop dreaming and coming up with awesome ideas. If an idea doesn't fit the next release, we'll keep it for the one after. And that's how it'll go.

Google Groups

In addition, this will be the last "discussion" I'll be posting in the issues section of GitHub. I'd like to propose moving discussions into our brand new Google Groups, located here:

And keep issues about issues that can be fixed. If you have something to add, ask yourself "can this be fixed by a pull-request?" If the answer is "Yes", then it's an issue. Otherwise, it's a discussion and belongs in the Google Group.

Anything left out?

If you find something missing from the Roadmap above, do speak up and we'll talk about including it in the 1.0.

Best,
Marcus

Shot publishing

While writing a shot-publishing test, something occurred to me. We might either have to start making assumptions, or provide configurable defaults, for determining the output from various instances within a scene.

In this shot, the animation curves of Peter is expected to be published under the given frame range. But how does Publish find the actual curves, without making assumptions about the instance?

Relying on conventions of TDs

One suggestion could be to enforce that instances contain a particular set under a given name or containing a certain attribute. Publish would look up the set and publish what it finds. But this is touching on management and workflow, which I'd rather steer clear from.

Relying on conventions of Artists

Another suggestion may be to not include instances in the publishable set, but instead put the animation controls there directly.

However this would mean exposing the user, who would have to make this selection in the first place, to internal details of rigs and what not. Simple for Publish, not so simple for artists.

For in-house publishing tools, this is less of an issue, as they could rightfully hard-code certain conventions that work for them.

Thoughts?

Plugins with identical file-names are duplicated

When importing plugins from a module select_environment.py located within one directory, and another file of identical name within another directory, the contents of said module are appended to rather than replaced.

I've narrowed it down to a caveat within the imp module which is used for dynamic loading of Python modules.

>>> mod1 = imp.load_source(name='select_environment',
...                        pathname=r'c:\some\path\select_environment.py')
>>> id(mod1)
55555551
>>> dir(mod1)
['UniqueClass1', 'UniqueClass2']
>>> mod2 = imp.load_source(name='select_something_else',
...                        pathname=r'c:\some\path\select_something_else.py')
>>> id(mod2)
55555552
>>> dir(mod2)
['UniqueClass3']

This is what we would expect; the two modules take up separate portions of memory and are thus completely separate objects. Loading one does not have any effect on the other; i.e. there are no side-effects.

>>> mod1 = imp.load_source(name='select_environment',
...                        pathname=r'c:\some\path\select_environment.py')
>>> id(mod1)
55555551
>>> mod2 = imp.load_source(name='select_environment',
...                        pathname=r'c:\some\other\path\select_environment.py')
>>> id(mod2)
55555551

Note the difference in its pathname, but identical name.

Here on the other hand, memory locations are identical, suggesting that imp has replaced mod1 with mod2. What really happens however is a merge.

>>> mod1 = imp.load_source(name='select_environment',
...                        pathname=r'c:\some\path\select_environment.py')
>>> dir(mod1)
['UniqueClass1', 'UniqueClass2']
>>> mod2 = imp.load_source(name='select_environment',
...                        pathname=r'c:\some\other\path\select_environment.py')
>>> dir(mod2)
['UniqueClass1', 'UniqueClass2', 'UniqueClass3']

Note that UniqueClass1 and UniqueClass2 are present in both mod1 and mod2.

This caused plugins to accumulate, resulting in lots of duplicate plugins being processed whenever they shared names.
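A sketch of one possible fix, using `importlib` (whose `module_from_spec` does not share `imp`'s module-cache merging behaviour) plus a load-name made unique by the full path as an extra safeguard. The directories and class names below are illustrative:

```python
import importlib.util
import os
import tempfile

def load_plugin(path):
    """Load a module under a name made unique by its full path,
    so two files sharing a basename can never merge."""
    base = os.path.splitext(os.path.basename(path))[0]
    unique_name = "%s_%x" % (base, abs(hash(path)))
    spec = importlib.util.spec_from_file_location(unique_name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Two directories, each with a select_environment.py of different contents
dir_a, dir_b = tempfile.mkdtemp(), tempfile.mkdtemp()
for directory, body in ((dir_a, "class UniqueClass1:\n    pass\n"),
                        (dir_b, "class UniqueClass3:\n    pass\n")):
    with open(os.path.join(directory, "select_environment.py"), "w") as f:
        f.write(body)

mod1 = load_plugin(os.path.join(dir_a, "select_environment.py"))
mod2 = load_plugin(os.path.join(dir_b, "select_environment.py"))
print(hasattr(mod1, "UniqueClass1"), hasattr(mod2, "UniqueClass1"))  # True False
```

Each call yields a fresh, independent module object, so plugins no longer accumulate across directories.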

Instances and Plugins

I'm getting a bit confused as to how to handle instances in a validator. Currently I'm using the tagging of individual nodes, so that each instance is a node. The problem comes when the instance is a group or objectSet.

How am I supposed to know what the instance is without doing lots of checks in the plugin?

Filter -> Plugin

I think it makes more sense to rename our current superclass Filter to Plugin, as it is a superclass for all of our plugins. We can reserve the Filter terminology for the "pipes and filters" implementation we'll use for the node-based workflow.

Any objections?

Data flow for GUI, Maya and Logic

I've got an idea of how to structure the flow of data within the Publish components.

The two proposals boil down to this:

Components

At this point, we've got three components:

1. Host: Contains the information we're validating/testing,
   along with persisting the settings we specify for each instance.

2. GUI: Visualises these settings and allows the users to modify them.

3. Behaviour: This is what actually computes - e.g. validating,
   extracting, parsing selection.

Responsible UI

One way of delegating responsibility between these components is this:

[image: flow of information, Responsible UI]

(4.) Maya transfers persistent settings into the GUI where users can then modify them
(5.) Once all is clear, the information is passed from both GUI and Maya onto Logic which performs computations.
(6.) Data is persisted within GUI; meaning externally as JSON/YAML etc.

The disadvantages of this approach are:

  1. Responsible UI; crash, misbehaviour or corruption influences output
  2. Difficult to debug; both for users and developers as all is contained within the GUI
  3. Duplicated data; Maya will have to persist some settings, and the UI will need to stay up to date

Responsible Host

Alternatively, we could give Maya the full responsibility of data persistence and use the GUI for visualisations of modifications only - no persistence.

[image: flow of information, Responsible Host]

(1.) Both the GUI and Users are permitted to modify the configuration,
(2.) All data is stored within Maya
(3.) Once configured, the GUI is no longer necessary, and Maya can communicate solely with Logic.

The benefits of this approach are:

  1. Simplified UI; less responsibility, can focus more on making it pretty.
  2. Simplified debugging; the UI would be optional, all can be done via Host
  3. Script-ability; users could potentially write their own GUIs or tools and disregard ours.
  4. Batch/distribution of publishing is free
  5. Single source of information/truth

Let me know if anything is unclear.

Best,
Marcus

Writing tests

I had an idea about trying to implement Publish using a test-driven development style, which in a nutshell means to write tests before code.

In our case, since we're developing for content creation, tests could potentially take the form of content. Content that would be successfully published with Publish once development on a particular feature is complete.

I've tried summarising a suggested workflow for working with tests, as well as provided some examples on tests that should run successfully once Publish is in a working state.

What are your thoughts on this?

https://github.com/abstractfactory/publish/wiki/Tests

Integrations

Hi all,

I'd like to propose a few integrations, along with formalising the one we've been working on for Maya, in accordance with the current Pull-Request about preparing for GUI development.

I've written up a Wiki page for each integration, stating the problem domain and a potential architecture, but here's a summary.

asana-publish

Publish for Asana - Essentially in the form of a few conform plugins. Each plugin could be responsible for a particular update, such as commenting on tasks associated with the particular asset being published, or updating the project status as a whole. Users subscribed to tasks would then be able to retrieve updates via mail or directly in Asana.

git-publish

Publish for Git - A case of "eating your own dogfood" whilst developing. :) This integration is about running validations on each commit, push, pull or merge, either directly before or after the operation.

maya-publish

Publish for Maya - Finally, what we've been doing so far, but in alignment with other integrations.

The way I'm imagining this to work is for each of these to have their own corresponding GitHub repositories and for each to be installable via pip.

$ pip install publish
$ pip install publish-asana
$ pip install publish-git

Further

We spoke briefly about whether we can safely assume any particular conventions employed within a production; like assuming everyone uses references or namespaces for rigs in Maya, or whether they use Shotgun for asset tracking.

I think that with integrations separated this way, we could potentially provide a full suite of configuration options, with GUIs, per integration.

For example, to use the Asana integration, we could assume that users tag their selections (be it object sets or transforms etc.) with the appropriate task reference within Asana and that this tag is called "asanaRef".

We could also provide an individual configuration setup for Asana, with this as an option, possibly with its own GUI to go with it.

# config.json
{
    "asanaReference": "my_custom_name"
}
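The override could then be read by the integration itself. A hedged sketch, assuming the default attribute name is "asanaRef" and that each integration loads its own config.json; both names are illustrative, not actual Pyblish API.

```python
# Hedged sketch: an integration reads its own config.json and falls
# back to an assumed default attribute name when a key is absent.
import json

DEFAULTS = {"asanaReference": "asanaRef"}


def load_config(text):
    """Merge user configuration on top of the integration defaults."""
    config = dict(DEFAULTS)
    config.update(json.loads(text))
    return config


print(load_config('{"asanaReference": "my_custom_name"}')["asanaReference"])
# prints my_custom_name
print(load_config('{}')["asanaReference"])
# prints asanaRef
```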

On that note, we could start providing much more specific integrations, that make assumptions about how people rig or model or texture and simply provide configuration for when things differ.

class ValidateCharacterControls(Validator):
    def process(self, context):
        """Stress test controllers

        The test will ensure that no ik-flipping occurs and that the
        mesh isn't experiencing stretch beyond 140%

        """

This particular example assumes that controls are readily available, and that there is some means of conforming the amount of stretch taking place upon a mesh, such as via SOuP or Fabric.

Even further

With this in mind, it would allow third-party developers the ability to provide suites of "best practices" or "production techniques" as an individual bundle, installable via pip, that captures how a particular studio or product does things.

$ pip install publish-ilm-rigging
$ pip install publish-glassworks-conform
$ pip install publish-shotgun-review

Ultimately facilitating conventions on a global scale that can be reused and perfected by anyone.

Stage 3 - Hack and slash

What do you guys think about moving on from Requirements gathering to getting Publish up and running?

https://github.com/abstractfactory/publish/wiki/development-status

It doesn't mean we won't find new requirements or won't be able to come up with new ideas, just that the ones we've currently got are sufficient in getting us off the ground and into writing some code!

My suggestion is for us to get a working copy up and running, with enough room for individual contribution, and to keep it running from now on. As in, at any point should anyone be free to grab a clean copy of our repository and start using it, regardless of what's happening behind the scene.

The target audience at this point is us and not necessarily any real users, with the goal of each of us being able to experiment with new ideas with working code and to directly contribute value whilst ensuring that changes don't eliminate or break any currently working feature.

I think we're ready! Let me know what you guys think!

Best,
Marcus

About versioning

Hey Guys,

A quick question about version numbers: why do we need zeroes in version numbers like v001? Instead, we could just start with v1.ma, which would give us more flexibility when we sort stuff. Right?
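For reference, the sorting behaviour in question can be demonstrated directly: plain string sorting interleaves unpadded versions once they pass single digits, whereas zero-padded names sort chronologically.

```python
# Plain (lexicographic) string sorting of version names,
# with and without zero-padding.
unpadded = sorted(["v1", "v2", "v10"])
padded = sorted(["v001", "v002", "v010"])

print(unpadded)  # ['v1', 'v10', 'v2'] - v10 sorts before v2
print(padded)    # ['v001', 'v002', 'v010'] - chronological
```

Unpadded names would therefore need a natural-sort comparison anywhere versions are listed, whereas padded names sort correctly everywhere by default.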

Logo

I've got a few suggestions for the Publish logo, let me know what you think @ljkart

(Nine logo candidates, attached as images 1-9.)

Auto-fix for Validators

  • Function to execute "fix" function on validators
  • Way to find out whether or not a validator has auto-fix code
  • Validators' "fix" function needs to return errors
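The three points above could be sketched as follows. Method names like `fix` and the returned-error-list convention are assumptions here, not a final Pyblish API.

```python
# Hedged sketch of auto-fix for validators; all names are
# illustrative, not actual Pyblish API.

class ValidateFrameRange(object):
    def __init__(self):
        self.errors = []

    def process(self, context):
        if context.get("startFrame", 0) >= context.get("endFrame", 0):
            self.errors.append("startFrame must precede endFrame")

    def fix(self, context):
        """Attempt an auto-fix; return errors that remain unresolved."""
        context["startFrame"], context["endFrame"] = 1, 24
        return []


def has_fix(validator):
    """Find out whether or not a validator carries auto-fix code."""
    return callable(getattr(validator, "fix", None))


validator = ValidateFrameRange()
context = {"startFrame": 10, "endFrame": 5}
validator.process(context)

if validator.errors and has_fix(validator):
    remaining = validator.fix(context)
    print(remaining)  # [] - everything was fixable
```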

Coding tasks

Wanted to pitch what the coding tasks would be:

  • Nodes to publish function
  • Output Classes
    • Validators
    • Output Data templates
  • Output Data Writers
    • File destination

Let me know what you think?

Testing individual plugins

So I was trying to troubleshoot a plugin, and couldn't find a way to isolate the plugin.

import pyblish
import pyblish.main
import pyblish.backend.plugin
import pyblish_maya.plugins.extract_review
reload(pyblish_maya.plugins.extract_review)

cxt = pyblish.backend.plugin.Context()

for instance, error in pyblish.main.select(cxt):
    print "Selected {0}".format(instance)

review = pyblish_maya.plugins.extract_review.ExtractReviewAsPng()
review.process(cxt)

Nothing happens when running this code.

Selecting using objectSet/Selection Sets

There's been some enquiries about why using sets is a good idea so I'll try and take a step back and look at some of the concerns sets help solve.

For starters, by "selecting", I'm referring to both specifying what to publish but also how to publish it.

Here are some suggested requirements and assumptions.

  1. Publishing is an incremental operation
  2. Not everything in a working scene may be meant for publishing
  3. Publishing is a common operation that happens often
  4. Visualising what is about to be exported is important for quality control
  5. Publishable scene should be testable
  6. Some nodes can't fit in a transform node (Maya)

I don't particularly mind which exact node carries which data, sets just seemed to me to solve these issues quite well and I didn't see a reason not to use them, but I'd be happy to hear what you find discouraging about them.

  1. By incremental, I mean that once settings for a publish have been set, such as frame ranges and author and what not, they most likely won't need to be changed and can remain throughout the next publishes. In the case of sets, attributes can be stored on the node that hold these settings. If these were to instead be stored within the tool, we would either have to get clever and let the tool know which persistent settings to look up, or reset the tool upon each run, which would invite inconsistency between publishes.

  2. (and 3). Let's talk rigging as an example. Most things may be located within an object hierarchy, but some things may be work in progress, not being ready for this "release". Other things may never be meant to follow along; such as measurements, statistics or other debug nodes. We will need a way for artists and TDs to distinguish between what goes, and what stays. With sets, included items are visible from the outliner and can be modified by anyone.

    For example, including a set of geometry:

  4. I personally feel that it should be as simple as possible to perform a publish, and that the UI then should offer a minimal set of configuration options. After an initial publish, the only new information between one publish and the next could potentially be a changelog or comment. Can you think of anything else?

  5. When everything is pre-configured within a scene, publishing can be batchable and testable without options or GUI.

  6. There may be nodes involved in a publish that a transform node can't carry; such as other sets, script nodes or any arbitrary non-dag node

    # For example, from the command-line
    $ publish --path=/spiderman/my_scene.mb
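Requirement 1 above can be sketched in plain Python. A simple object stands in for what would be attributes on a Maya objectSet; all names are illustrative.

```python
# Hedged sketch: persistent publish settings stored on the selection
# set itself. In Maya these would be attributes on an objectSet node.

class PublishSet(object):
    def __init__(self, name, members):
        self.name = name
        self.members = list(members)  # what to publish
        self.attributes = {}          # how to publish it


body = PublishSet("body_SEL", ["body_GEO", "eyes_GEO"])
body.attributes.update({
    "startFrame": 1001,
    "endFrame": 1101,
    "author": "marcus",
})

# Settings survive between publishes; only the comment is new
# from one publish to the next.
body.attributes["comment"] = "Tweaked shoulder weights"
print(sorted(body.attributes))
# ['author', 'comment', 'endFrame', 'startFrame']
```

Because both the membership (what) and the attributes (how) live in the scene, the same set also satisfies the batchability requirement: no GUI state is needed to repeat the publish.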

What do you think about these requirements and their solutions?

Publish as a Namespace Package

I just came across an interesting feature of Python that I'd never heard about before and figure I'd initiate a discussion about the prospect of using it for Publish.

The Problem

The problem is related to dividing our project into individual repositories and thus ending up with lots of top-level packages on the users file-system. For example, if we split up the project without making it a namespace package, it may look something like this:

  • dir publish
  • dir publishmaya
  • dir publishhoudini
  • dir publishasana
import publish.backend
import publishmaya
import publishasana

To install, we could do

$ pip install publish
$ pip install publish-maya

The Solution

If we instead split Publish up as a namespace package, we could get the same benefits as with separated repositories and instead end up with:

  • dir publish.main
  • dir publish.maya
  • dir publish.houdini
  • dir publish.asana

import publish.main
import publish.maya
import publish.asana

That is, maintaining everything within a single namespace - publish

Development, distribution and installation works similarly.

$ pip install publish.main
$ pip install publish.maya
$ pip install publish.asana

An added benefit is that this opens up for using "publish.main" or similar, as opposed to "pypublish", since the top-level name "publish" is already occupied.

How it works

A namespace package is..

..a mechanism for splitting a single Python package across multiple directories on disk. - PEP 420


Which in practice means that we can do:

$ pip install publish.main

At this point, we've got what we would expect:

>>> import publish.main

Which, when installed, looks like this:

  • dir site-packages
    • dir publish
      • dir main
        • file __init__.py

Note that there is no file __init__.py in dir publish.

However, we can't do:

>>> import publish.maya
ImportError: No module named maya

Until we do:

$ pip install publish.maya

Which installs an additional sub-package of publish, as such:

>>> import publish.maya

So now the install looks like this:

  • dir site-packages
    • dir publish
      • dir main
        • file __init__.py
      • dir maya
        • file __init__.py
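This layout can be reproduced end-to-end with PEP 420 implicit namespace packages (Python 3.3+), where two separate install locations both contribute portions to the single "publish" namespace. The directory and attribute names below are illustrative only.

```python
# Hedged sketch: two independent "site-packages" directories, as pip
# might create for publish.main and publish.maya respectively, merged
# into one namespace at import time.
import os
import sys
import tempfile

root = tempfile.mkdtemp()

for location, subpackage in [("site_a", "main"), ("site_b", "maya")]:
    pkg = os.path.join(root, location, "publish", subpackage)
    os.makedirs(pkg)
    # Note: no __init__.py in publish/ itself - its absence is what
    # makes "publish" a namespace package under PEP 420.
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("name = %r\n" % subpackage)

# Put both locations on the path, then import as usual.
sys.path[:0] = [os.path.join(root, "site_a"), os.path.join(root, "site_b")]

import publish.main
import publish.maya

print(publish.main.name, publish.maya.name)
```

For the Python 2 era of this discussion, the equivalent would be pkgutil- or pkg_resources-style namespace packages, which do require a special `__init__.py` in publish/ at each location.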

Conclusion

I've never worked with this before, so it's possibly buggy or illogical on other fronts. The only benefit I can think of is cosmetic; can you think of anything else?

I got inspired by this repository, which is distributed as a namespace package:

So, question is, cool or not cool?

Google Groups/Mailing list

Do you think we need a Google Groups? E.g. like the Maya-Python group.

The GitHub issues are essentially like a forum (at least the way we've been using it) but maybe it isn't as great for archiving and searching amongst older posts, or for more casual or general conversations.

Tests sometimes fail

This turned out to be due to tests modifying pyblish.backend.config whilst at the same time testing it for correctness.

Define a set of scenarios

Scenarios which the system is likely to encounter. These will shape development and keep efforts focused.

Mission statement

We'll need to find a proper mission statement that clearly but succinctly summarises the project, to let newcomers know what to expect.

Licensing

Currently under the LGPL, although this may prevent studios from fully adapting it to their needs. Is MIT better suited?
