This is a very interesting addition to SynthEyes. The ability to generate rules from natural language looks promising in a production environment, giving scripting power to the artist. I haven't personally tested it yet, but I really want to.
Press release below.
July 24th, 2014. Phoenixville, PA. For immediate release.
The latest 1407 version of Andersson Technologies LLC's SynthEyes 3-D tracking application includes a revolutionary new productivity tool, Synthia. The SynthEyes Instructible Assistant, i.e., Synthia, enables chat-style natural-language control. Unlike simpler assistants such as Siri or Google Now, Synthia responds to specific instructions, simple or complex, and can be instructed by the user in English to add additional functionality. Cloud communications make possible rapid improvement based on users' experiences.
According to company owner Russ Andersson, "Software users in general, and tracking artists in particular, face increasingly complex tasks and shrinking timescales. Pro applications have added many features and much flexibility, but users have less and less time to learn them. Graphical interaction has become complex, with layer after layer of menus and a multitude of icons of uncertain function. Nor do most users have the programming expertise necessary to automate simple tasks using conventional scripting languages. Synthia shows the way to a new future where software is able to understand natural language instructions and perform tasks directly, with a paradigm that is at once very new and very old: the spoken and written word."
Synthia consists of an extensive underlying infrastructure, which has then been taught about SynthEyes. Accordingly, there is the prospect that Synthia may control other applications in the future as well.
Other new features in 1407 include "notes" for communications between tracking
Ssontech posted this neat tutorial showing how to set up SynthEyes's interface for a stereo shot. Regardless of your experience, I encourage you to watch it. It's short and informative and might reveal some "hidden" features you forgot about or just didn't know.
This panorama I made combines two image sets. The black-and-white images are from the MSL rover's Navcam, taken on Sol 2 (the second day of the mission). The colour portions are higher-resolution images from Curiosity's Mast Camera (Mastcam), taken on Sol 3.
What you see below is a quarter-resolution version of my results. The final image is 28,862 x 6,830 pixels, roughly 197 megapixels, courtesy of the first high-resolution image sensors to make it to Mars. These CCD chips output 1600 x 1200 images. That doesn't sound like much, but for the biggest rover sent to another planet, it's a big deal.
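For the curious, the pixel count is easy to verify. A quick check (not part of the original post, just arithmetic):

```python
# Sanity-check the panorama's pixel count: 28,862 x 6,830 pixels.
width, height = 28_862, 6_830
total = width * height
print(total)        # 197127460
print(total / 1e9)  # ~0.197 gigapixels, i.e. about 197 megapixels
```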
Enjoy the view.
If the Flash panorama viewer does not load, try reloading the page.
View this at 360cities.net: Curiosity's first color panoramic image. Sol 2 and Sol 3 data. Mars Science Laboratory.
Today, 1 August 2012, according to this official press release, Andersson Technologies will be featuring the 1208 version of the SynthEyes camera-tracking software. Huge changes are expected: probably a node-based workflow, rolling-shutter compensation for solves, and improvements in feature detection. A new licensing and release model will also be available. I am still waiting to receive a demo version to see the new and improved SynthEyes.
36 shots and about 2 weeks of production time. Things I did: matchmoving, compositing on 2 shots, modeling and camera projections for the ice wall.
Everything was shot on a frozen lake with only a few greenscreen shots. All the environments are either extended or replaced. The underwater part was shot in a pool on bluescreen, all the underwater ice is CG as well as the helicopter.
"Tornado" - Media Pro Magic Factory - 2010
Things I did: rigid-body dynamics, shading and texturing for the wrecked car; camera animation and environment camera-map projections.
A TV ident I worked on: 120 shots executed in about one month.
I just finished work on Yimou Zhang's latest movie. Part of our work appears in this trailer. Enjoy.
odd_FillLight is a High Dynamic Range and Low Dynamic Range tonemapping tool for Nuke v6.x. It's based on scotopic and photopic tonemapping algorithms meant to bring images closer to how the human visual system perceives light. It uses image geometry and luminance density to compress or expand luminance ranges.
- lights up shadowed areas (does not remove shadows)
- values are limited to usable ranges (you cannot over-process an image, as happens with some tonemapped images on the internet)
- homogenizes surfaces (good for removing some shading from pictures intended for texture painting)
- brightens up your footage so you can see more features for tracking
- tonemaps HDR sky domes for your sky-replacement shots
- simple controls
- works on 8-bit, 16-bit, and 32-bit (HDR, photometric HDR, EXR, non-clamped EXR) images
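odd_FillLight's exact algorithm isn't published, but the core idea of compressing a wide luminance range into a displayable one can be illustrated with a generic Reinhard-style global operator. This is a minimal NumPy sketch of that general technique, not the gizmo itself; the function name and parameter are illustrative:

```python
import numpy as np

def compress_luminance(rgb, strength=1.0):
    """Generic Reinhard-style global tonemap: L' = L / (1 + s*L).
    Bright values are compressed below 1.0 while dark values are
    left nearly untouched, so shadow detail becomes relatively brighter."""
    # Rec.709 luma weights
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    lum = np.maximum(lum, 1e-6)                 # avoid division by zero
    new_lum = lum / (1.0 + strength * lum)      # compress the luminance range
    return rgb * (new_lum / lum)[..., None]     # rescale RGB by the luminance ratio

# A tiny 2-pixel "image": one dark pixel, one very bright HDR pixel
img = np.array([[[0.05, 0.05, 0.05], [8.0, 8.0, 8.0]]])
out = compress_luminance(img)
# the bright pixel is pulled below 1.0; the dark pixel barely changes
```

A production tool like odd_FillLight also uses local (geometry-aware) information rather than a single global curve, which is what lets it homogenize surfaces instead of just darkening highlights.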
Hello, internet. I present to you the results produced by a little gizmo I started about 1.5 years ago.
It's an attempt at reducing the heavy distortions produced by rolling shutter. It uses Nuke's camera motion vectors to correct the image. The only requirement is a camera node that has been tracked beforehand.
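The underlying idea is easy to sketch outside Nuke. A rolling-shutter sensor exposes each scanline slightly later than the one above, so under camera motion each row is offset in proportion to its vertical position, producing the familiar skew. The toy NumPy example below (not my gizmo, which warps with per-pixel camera vectors; this assumes constant horizontal motion and the function name is illustrative) shifts each row back by the motion accumulated during readout:

```python
import numpy as np

def correct_rolling_shutter(img, vx, readout=1.0):
    """Undo rolling-shutter skew under constant horizontal motion.
    vx is the per-frame camera motion in pixels; 'readout' is the
    fraction of the frame time the sensor spends scanning top to bottom."""
    h, w = img.shape[:2]
    out = np.empty_like(img)
    for y in range(h):
        t = (y / h) * readout                  # how "late" this row was sampled
        dx = int(round(vx * t))                # skew accumulated by that time
        out[y] = np.roll(img[y], -dx, axis=0)  # shift the row back to cancel it
    return out

# A 4x8 test image with a diagonal "skewed" line: pixel at column 2+y in row y
img = np.zeros((4, 8))
for y in range(4):
    img[y, 2 + y] = 1.0
out = correct_rolling_shutter(img, vx=4.0)
# after correction the line is vertical: every row's bright pixel sits at column 2
```

Real footage needs per-pixel motion from the solved camera, which is where a tracked Camera node and Nuke's vector tools come in.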
I'll post more information when I find the time. If you find this interesting, please drop a comment.