discussion of global concepts for the future of mirage
For submitting and viewing specific feature requests, please visit the Mirage Feature Requests section on sourceforge.



mirage next version

general comments -

mirage 1.1 is a working video engine for use in networked environments. it features 8 source slots (each can hold one of: camera, movie, still, buffer, feedback, or external), 8 effects channels (each with a choice of 7 effects - 6 standard and 1 dynamic) and 12 layers for compositing. the structure is sketched just below.
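as an orientation, here is a minimal sketch of that structure as plain Python data. every name here is invented for illustration - this is not actual mirage code, which is a max/softVNS patch:

```python
# Hypothetical model of the mirage 1.1 structure described above:
# 8 source slots, 8 effects channels (7 effect choices each),
# 12 compositing layers.
from dataclasses import dataclass

SOURCE_TYPES = ("camera", "movie", "still", "buffer", "feedback", "external")

@dataclass
class SourceSlot:
    kind: str = "movie"      # one of SOURCE_TYPES

@dataclass
class EffectChannel:
    effect: int = 0          # index 0-6: 6 standard effects + 1 dynamic

@dataclass
class Layer:
    input: int = 0           # which slot/channel feeds this layer
    opacity: float = 1.0

engine = {
    "sources": [SourceSlot() for _ in range(8)],
    "effects": [EffectChannel() for _ in range(8)],
    "layers":  [Layer() for _ in range(12)],
}
```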

mirage is currently based on max and softVNS. this decision was taken 3 years ago due to the speed advantages and relative maturity of softVNS compared to jitter. the situation has now changed: with the introduction of YUV processing in jitter 1.5 (to be released in sept 2005), there should be little or no speed difference between the two systems. add to this the large development community around jitter and the choice is clear - the next mirage, if we stay with max, will be in jitter.

"if we stay with max" - one could argue that it would be best to build the application from scratch using C++ or adapt existing applications such as modul8 ( http://www.modul8.ch ) or veejay ( http://veejay.sourceforge.net ). there are speed and reliability gains here but the main drawback is being dependent on a developer or a group of developers for changes and upgrades - while with a max/jitter based system the system can be open for users to write new effects, sources and so on easily. As for speed (modul8 for example currently far outperforms jitter in reproducing DV quality video) we hope that with corevideo this will no longer be a problem.

analysis of possible development directions :

1. Native Cocoa App (Obj-C, QT, CoreImage, CoreVideo, QuartzExtreme, Quartz2DExtreme)
2. Jitter based Max patch
3. SoftVNS based Max patch


1. Native Cocoa App (Obj-C, QT, CoreImage, CoreVideo, QuartzExtreme, Quartz2DExtreme)

Note: After much research, i have discovered that this option is
entirely possible from a technical standpoint.

PROs:
-the application would be of the best quality possible
-there are many good programmers out there that could and would want
to work on this
-it could be truly open source and not reliant on Max
-the knowledge gained from working on it would be very beneficial to
all of us in the long run

CONs:
-added learning curve for those currently involved with the project.
while beneficial in the long run, this will cost _time_.
-we won't be able to reuse any existing code
-the entire development time will be MUCH longer. at least four or
five times as much work, maybe more.
-paying programmers who already have this specialty would cost MUCH
more. even the ones i talked to who were willing to take a big cut
would still cost way more than i cost. on average, they are used to
raking in more than 60 dollars an hour.
-while the number of programmers who know Obj-C is much greater than
the number who know Max (and almost all schools are geared towards C
etc..), we would lose many artists who really only work in Max.

Bottom Line: this option costs too much time and too much money right
now. if in the future we get a *massive* corporate-sized grant, i
would love to revisit this option, but right now it's out of our
league. however, i am very happy to have done the research on this.
before, i thought it was just impossible, but now i know that i could
totally do a project like this. all the connections are in place if
we ever get to this point :)



2. Jitter based Max patch

Note: jit 1.5 will almost certainly be out by the end of august.

PROs:
-the application would not be using proprietary code or objects
-jitter objects are used more widely
-there are more jitter programmers than softVNS programmers
-there is a team of people working behind jitter
-historically, max and jitter support is amazing
-there is an SDK for jitter
-jitter allows us to offload the compositing engine onto the GPU
-hardware shaders also allow for an infinite number of dynamic effects
-IT WILL RUN ON A PC! (CIM's iLok will work on PCs _and_ macs!)
-we can reuse some portions of code
-the new jit will speed up processing greatly

CONs:
-none of the VNS specific code can be reused
-it will take time and money to redesign the architecture, rework and
recreate the needed algorithms, and rewrite the software.
-if we want all the effects to happen on the GPU, plus anything more
than the basic set of compositing effects (add, subtract, blend,
etc..), the computer will need an NVIDIA GeForce FX 5200 or better,
or an ATI Radeon 9600 or better (i believe a variety of 3DLabs cards
are also supported on PCs).
-it could be costly (processing-wise) to use the composited output
(an openGL scene) as another source. there are, however, many
efficient ways around this.
-we'll need to purchase jit 1.5

Bottom Line: this option is replete with benefits! the long-term
gains from this transition would be substantial. i've begun scouting
for a team, just trying to gauge how much interest exists. here's
hoping the grants come through!



3. SoftVNS based Max patch

Note: Rokeby is working on a new softVNS which he thinks will be even
faster than jitter (this is from a source of mine who's talked with
him directly about it in the last week). it is unknown when this will
happen.

PROs:
-tons of reusable code
-the promise of openGL acceleration
-the promise of hardware shaders
-possibly faster development

CONs:
-only one person develops it, and support is spotty
-the new features are only promises (david does have a new baby etc..)
-we have no idea when this would happen
-no SDK
-fewer programmers know vns and the community surrounding it is
smaller than that of jitter.
-Mac specific

Bottom Line: while there may be some neat features, using softVNS
closes off our options. with today's computers getting faster and
faster, the speed argument for softVNS matters less and less. i
recommend dropping it.



the remaining tasks we have to complete:

1. Design & Architecture
     a. brainstorm - how will the program operate, in high-level and
        conceptual terms (not programming)
b. the design must be 100% modular!
     c. we already did some of this last night in mulhouse but..
     d. we need a proper design session/workshop. i'm doing this
        right now with my current residency project and will report
        back. if we want this done right, we need to come up with a
        complete and clear design before implementing it in code.
        this will actually take care of lots of the documentation
        before we ever start coding!! :)

2. Proposal
a. the design is needed before a proper time estimate can be made
b. the design will become part of the proposal
     c. the rest of the proposal may include the architecture and
        flow of our work, how mirage will be used, the significance
        of live video in theater, and an overview of CIM's projects
        and vision.
d. we will recognize that the design may need to be altered as
work gets underway. this is natural.

3. Implementation and Programming!
a. needs before work can begin
i. people and contracts
ii. time
iii. money
iv. hardware
v. software
          vi. communication
b. the work
i. project manager and conceptual visionary (pedro)
a. should be both a user and programmer,
but more user oriented
b. someone who can translate the technical into
the conceptual
c. in charge and works on the budget
d. may work some with programmers
ii. programming team leader (jonathan)
a. should be both a user and programmer,
but more programmer oriented
b. someone who can translate the conceptual into
the technical
c. divides and manages modular programming
d. works directly and often with programmers
e. also works as a programmer
iii. programmers
a. work on parts of mirage
             b. may be responsible for a complete section or more
c. must stay in direct communication with programming
team leader about updates, documentation, and
problems.
c. the modular tasks
i. will be fleshed out more after a design is finalized
d. iterative debugging and development
i. more later




some first conceptual thoughts :

It is clear that we should do a careful analysis of what is missing in the current version of mirage and what could be future lines of development - also looking at veejay and modul8 for inspiration. Here are some first thoughts :

1. All sources and effects should be dynamic : by this we mean that each effect and source is a patch that follows some basic guidelines for integration into the mirage structure. Mirage becomes a kind of framework/language for chaining patches together. The strength of mirage is in its naming structure - this is something that must be preserved and built on. (a rough sketch of such a patch contract follows after this list.)

2. compositing possibilities are currently very limited in mirage - as well as max/min there should be proper luma and colour keying capabilities (see http://veejay.sourceforge.net/middle-gallery.html ) as well as individual mask streams for each layer (the possibility to use any source as a mask - AFTER treatment in the effects chain). (a luma key sketch follows after this list.)

3. Layers should be gl based - permitting all the usual 3D transformations - rotations and displacements then become very efficient. (see the transform sketch after this list.)

4. variety of outputs : multiple screens (how does this work conceptually - how do you say which output is taken from where ?) ; streaming ; recording to disk.

5. i think we should consider mirage as a "dumb" engine - i.e. no preset or control systems are included. All controlling happens from a control layer ... one that you can make yourself in pure data, max or any other language that understands OSC, or built from the tools we develop, which could be downloaded with mirage or could be a separate (paying ?) layer ... (an OSC sketch follows below.)
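to make point 1 concrete, here is a rough sketch - in Python rather than a Max patch - of the kind of guideline contract a dynamic source or effect could follow. every name here is an assumption for illustration, not an existing mirage API:

```python
# Hypothetical "dynamic patch" contract: every source or effect is a
# module that declares a mirage name and processes frames. the mirage
# framework's only job is to chain patches together by name.
class MiragePatch:
    name = "unnamed"             # slots into the mirage naming structure

    def set_param(self, key, value):
        """Receive a named parameter, e.g. from a control layer."""
        raise NotImplementedError

    def process(self, frame):
        """Take one frame in, return one frame out."""
        raise NotImplementedError

class Invert(MiragePatch):
    name = "effects.invert"

    def set_param(self, key, value):
        pass                     # invert takes no parameters

    def process(self, frame):
        return 255 - frame       # e.g. a numpy uint8 frame

def run_chain(patches, frame):
    """Chain any number of patches into one effects channel."""
    for p in patches:
        frame = p.process(frame)
    return frame
```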
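and a minimal sketch of the luma keying idea from point 2, assuming float RGB frames in the range 0..1 and Rec.601 luminance weights:

```python
# Luma key: foreground pixels brighter than the threshold are kept,
# the rest show the background. a per-layer mask stream would simply
# replace `alpha` with any other (treated) source.
import numpy as np

def luma_key(fg, bg, threshold=0.5):
    luma = 0.299 * fg[..., 0] + 0.587 * fg[..., 1] + 0.114 * fg[..., 2]
    alpha = (luma > threshold).astype(fg.dtype)[..., None]
    return alpha * fg + (1.0 - alpha) * bg
```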
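for point 3, a small illustration of why gl layers make rotations cheap: a layer is just a textured quad, so a rotation is one 4x4 matrix per frame (applied by the GPU) instead of a per-pixel resample on the CPU:

```python
# Rotating a gl layer = multiplying its four quad corners by one
# rotation matrix; the GPU handles the per-pixel resampling for free.
import numpy as np

def rotation_z(degrees):
    t = np.radians(degrees)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

# the four corners of a layer quad, as homogeneous column vectors
quad = np.array([[-1, -1, 0, 1],
                 [ 1, -1, 0, 1],
                 [ 1,  1, 0, 1],
                 [-1,  1, 0, 1]], dtype=float).T
rotated = rotation_z(30) @ quad
```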
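finally, a sketch of the "dumb" engine idea from point 5, using the python-osc package. the OSC address scheme shown here is invented for illustration - the engine would only listen, and all intelligence would live in whatever sends the messages:

```python
# A control layer driving mirage over OSC. host, port and the
# /mirage/... addresses are assumptions, not a defined protocol.
from pythonosc.udp_client import SimpleUDPClient

mirage = SimpleUDPClient("127.0.0.1", 9000)   # assumed host and port
mirage.send_message("/mirage/source/1/kind", "movie")
mirage.send_message("/mirage/layer/3/opacity", 0.75)
mirage.send_message("/mirage/effects/2/param", ["threshold", 0.4])
```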



suggestion from mathieu : main.crossfade with fx transitions in the style of fcp and the quicktime examples (iris, wipe, blind, ...) - a more traditional way of mixing two composites from bus A to bus B, which could interest users with more traditional video-mixer needs. (sketched below.)
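a minimal sketch of this bus idea, assuming float frames in 0..1: a straight crossfade is a per-pixel mix, and a wipe-style transition is the same mix with a position-dependent mask in place of the scalar t:

```python
# A/B bus mixing. frames are numpy arrays of shape (height, width, 3).
import numpy as np

def crossfade(bus_a, bus_b, t):
    """t = 0.0 -> all of bus A, t = 1.0 -> all of bus B."""
    return (1.0 - t) * bus_a + t * bus_b

def wipe(bus_a, bus_b, t):
    """Left-to-right wipe: the mix factor varies per column."""
    w = bus_a.shape[1]
    mask = (np.arange(w) < t * w).astype(bus_a.dtype)[None, :, None]
    return (1.0 - mask) * bus_a + mask * bus_b
```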