Nikolai Kozak


Multidisciplinary artist and wooden boatbuilder, 
interested in memory and interfaces.

Former partner at 
Bede-Fazekas & Kozak in Budapest, Hungary

Currently attending ITP @ NYU.

Seeing Machines




Building a Shitty Stereocam


The goal here was to see if I could, as cheaply as possible, build a stereo camera using tiny Adafruit breakout cameras.



The camera mount was built using acrylic and some quick 3D printing. The distance between the cameras is adjustable, though I'm unsure how much of a difference it actually makes.

An OpenFrameworks application serves as a Syphon server, transferring camera input into a Python application. I forked a repo called `stereodemo`, by Nicolas Burrus, which compiles a series of machine-learning-based stereo depth estimation models and lets you switch between them.
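For reference, the OF side boils down to something like the sketch below - a rough approximation, assuming the ofxSyphon addon, two `ofVideoGrabber`s for the breakout cameras, and a side-by-side packing into one texture (the device IDs, server name, and packing layout are all placeholders, not the actual app's code):

```cpp
// ofApp.h (sketch) - publish a stereo pair over Syphon. Drop into a standard
// OF project with the ofxSyphon addon; device IDs and sizes are assumptions.
#pragma once
#include "ofMain.h"
#include "ofxSyphon.h"

class ofApp : public ofBaseApp {
public:
    void setup() {
        leftCam.setDeviceID(0);            // which breakout is which is an assumption
        rightCam.setDeviceID(1);
        leftCam.setup(640, 480);
        rightCam.setup(640, 480);
        fbo.allocate(1280, 480, GL_RGB);
        server.setName("StereoPair");      // the Python side looks this name up
    }
    void update() {
        leftCam.update();
        rightCam.update();
        // Pack both views side by side so a single Syphon stream carries
        // the whole stereo pair.
        fbo.begin();
        leftCam.draw(0, 0, 640, 480);
        rightCam.draw(640, 0, 640, 480);
        fbo.end();
        server.publishTexture(&fbo.getTexture());
    }
    void draw() { fbo.draw(0, 0); }

    ofVideoGrabber leftCam, rightCam;
    ofFbo fbo;
    ofxSyphonServer server;
};
```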



I modified `stereodemo` so that it reads its images from the OpenFrameworks Syphon server instead of a static set of sample images. This way, we can feed images directly from our small stereo camera, and also activate a "live mode", which is quite slow but works best with the "live" model included in stereodemo.



Some models work better than others, though none gives us great results. I need to look more into calibrating the two cameras in order to get better results, given that these models expect a fairly precise, rectified side-by-side configuration.
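The kind of calibration I have in mind is the standard checkerboard-based stereo calibration - roughly the outline below (the board size, square size, file paths, and image count are placeholders, and this isn't code from the repo):

```cpp
// Rough stereo-calibration sketch using OpenCV. Assumes pairs of checkerboard
// images captured simultaneously from the left and right cameras.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    const cv::Size boardSize(9, 6);      // inner corners; placeholder
    const float squareSize = 0.024f;     // square edge in metres; placeholder
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> leftPoints, rightPoints;

    // One canonical set of 3D checkerboard corners, reused for every pair.
    std::vector<cv::Point3f> corners3d;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            corners3d.emplace_back(x * squareSize, y * squareSize, 0.f);

    cv::Size imageSize;
    for (int i = 0; i < 20; ++i) {       // 20 pairs; placeholder count
        cv::Mat l = cv::imread(cv::format("calib/left_%02d.png", i), cv::IMREAD_GRAYSCALE);
        cv::Mat r = cv::imread(cv::format("calib/right_%02d.png", i), cv::IMREAD_GRAYSCALE);
        if (l.empty() || r.empty()) continue;
        imageSize = l.size();

        std::vector<cv::Point2f> lc, rc;
        if (cv::findChessboardCorners(l, boardSize, lc) &&
            cv::findChessboardCorners(r, boardSize, rc)) {
            leftPoints.push_back(lc);
            rightPoints.push_back(rc);
            objectPoints.push_back(corners3d);
        }
    }

    // Intrinsics for each camera plus the rotation/translation between them.
    cv::Mat K1, D1, K2, D2, R, T, E, F;
    cv::stereoCalibrate(objectPoints, leftPoints, rightPoints,
                        K1, D1, K2, D2, imageSize, R, T, E, F, 0);

    // Rectification, so the two views end up row-aligned - which is what the
    // depth models expect.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T, R1, R2, P1, P2, Q);

    cv::FileStorage fs("stereo_calib.yml", cv::FileStorage::WRITE);
    fs << "K1" << K1 << "D1" << D1 << "K2" << K2 << "D2" << D2
       << "R" << R << "T" << T << "R1" << R1 << "R2" << R2
       << "P1" << P1 << "P2" << P2 << "Q" << Q;
    return 0;
}
```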

Running the App


Clone the `stereodemo` Python repo: https://github.com/nikokozak/stereodemo
Clone the `stereodemo_of` OpenFrameworks repo: https://github.com/nikokozak/stereodemo_of

Move the `stereodemo_of` folder into your `myApps` OF folder and build/run it.

Once the OF app is running and receiving input, start the Python app by running `./reinstall.sh` in the app folder, and then run it with `python -m stereodemo --syphon`.

Avoid picking the Monocular model, as it crashes the application (it expects one input instead of the two we provide by default).





Final Project - Live video masking


This project is an attempt to closely coordinate information being generated in an OpenFrameworks sketch with a mobile screen in the real world. With some luck, this connection will make it seem like the “real” screen is revealing portions of what is happening in OF, and thus create the illusion of an invisible, larger frame of footage sitting in space around the screen.


Inspiration










What it will be


While I'm unsure of the specifics, the gist of the project is this:

Archival footage heavily featuring human faces is analyzed, and the faces are detected and framed as blobs. At each moment, one face is given "priority". That area of the footage is cropped at a pre-determined aspect ratio and sent out to a small, 3.5" LCD screen, which sits on the arm of a modified pen plotter that has been turned upright so the screen can move up, down, left and right in space. As the faces in the footage move, or new faces appear, the cropped "face buffer" moves as well - these coordinates are transmitted to the plotter, so that the screen moves to a new location matching the on-screen crop rectangle.

By doing so, we create the illusion that there is an invisible canvas of footage behind the small mobile screen being revealed every time this screen moves.

The faces will be tracked in real time using Haar Cascades. 
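A rough sketch of the detection-plus-priority step, using the Haar finder that ships with the ofxOpenCv addon - the "largest face wins" rule, the filenames, and the crop size are my placeholders, not decided parts of the project:

```cpp
// Face detection + "priority" crop sketch using ofxOpenCv's Haar finder.
// Cascade XML is expected in bin/data; clip name and crop size are assumptions.
#include "ofMain.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp {
public:
    void setup() {
        footage.load("archival.mp4");                        // placeholder clip
        footage.play();
        colorImg.allocate(footage.getWidth(), footage.getHeight());
        grayImg.allocate(footage.getWidth(), footage.getHeight());
        finder.setup("haarcascade_frontalface_default.xml");
        crop.set(0, 0, 480, 320);                            // pre-determined aspect ratio
    }
    void update() {
        footage.update();
        if (!footage.isFrameNew()) return;

        colorImg.setFromPixels(footage.getPixels().getData(),
                               footage.getWidth(), footage.getHeight());
        grayImg = colorImg;                                  // convert to grayscale
        finder.findHaarObjects(grayImg);

        // "Priority": pick the largest detected face and centre the crop on it.
        float bestArea = 0;
        for (auto& blob : finder.blobs) {
            ofRectangle r = blob.boundingRect;
            if (r.getArea() > bestArea) {
                bestArea = r.getArea();
                crop.setFromCenter(r.getCenter().x, r.getCenter().y,
                                   crop.width, crop.height);
            }
        }
        // crop.x / crop.y are what eventually get mapped to plotter coordinates.
    }
    void draw() {
        footage.draw(0, 0);
        ofNoFill();
        ofDrawRectangle(crop);                               // visualise the face buffer
    }

    ofVideoPlayer footage;
    ofxCvColorImage colorImg;
    ofxCvGrayscaleImage grayImg;
    ofxCvHaarFinder finder;
    ofRectangle crop;
};
```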


Equipment


1 x Arduino Uno (important for its 5 V logic level)
1 x 3.5” HDMI Raspberry Pi LCD (doubles as an external monitor)
1 x USB-C to USB-C cable and charger (for powering the LCD)
1 x Pen Plotter Kit
2 x NEMA 17 steppers (upgraded from the kit ones to handle the extra torque requirements)
2 x Microstepping drivers (proper ones)




Update - Apr 28




So - the project got off to a rocky start, but is now moving smoothly.

It turns out that reinventing a control system for a Core-XY style plotter with new steppers and drivers is fairly involved. Having an Arduino Uno saved me from having to build a 5 V logic-level jig (transistors, a separate power source, etc.), which would've been necessary with newer Arduinos.

Wiring the steppers/drivers was also tricky - as usual, no manuals were included with the steppers or the drivers, so figuring out which wires corresponded to which coils was a bit of a guessing exercise.



Once everything was set up, it was time to begin implementing a control system in code. Core-XY plotters require synchronous movement from the two motors in order to achieve X or Y movement; therefore, setting up movement functions is a bit more involved than simply saying “move motor 1” or “move motor 2”.
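The underlying Core-XY relationship is a simple coordinate transform - each motor contributes to both axes - and the movement function is essentially that transform plus interleaved stepping. A stripped-down Arduino sketch of the idea (the pin numbers, helper functions, delay, and sign convention are assumptions; signs in particular depend on how the belt is routed):

```cpp
// CoreXY kinematics sketch: convert a desired X/Y move (in steps) into
// synchronized steps for motors A and B.
const int STEP_A = 2, DIR_A = 5;   // hypothetical step/dir pins, driver A
const int STEP_B = 3, DIR_B = 6;   // hypothetical step/dir pins, driver B

void pulse(int stepPin, int dirPin, int dir) {
  digitalWrite(dirPin, dir > 0 ? HIGH : LOW);
  digitalWrite(stepPin, HIGH);
  delayMicroseconds(5);
  digitalWrite(stepPin, LOW);
}

void moveXY(long dx, long dy) {
  long stepsA = dx + dy;           // the CoreXY transform: each motor
  long stepsB = dx - dy;           // contributes to both axes
  long nA = abs(stepsA), nB = abs(stepsB);
  long n = max(nA, nB);
  // Interleave the two motors' steps so the head travels in a straight line
  // instead of doing all of A's motion first and then all of B's.
  for (long i = 0; i < n; ++i) {
    if ((i + 1) * nA / n > i * nA / n) pulse(STEP_A, DIR_A, stepsA > 0 ? 1 : -1);
    if ((i + 1) * nB / n > i * nB / n) pulse(STEP_B, DIR_B, stepsB > 0 ? 1 : -1);
    delayMicroseconds(400);        // crude, constant speed; placeholder value
  }
}

void setup() {
  pinMode(STEP_A, OUTPUT); pinMode(DIR_A, OUTPUT);
  pinMode(STEP_B, OUTPUT); pinMode(DIR_B, OUTPUT);
}

void loop() {
  moveXY(200, 0);                  // pure +X move: both motors turn the same way
  delay(1000);
  moveXY(-200, 0);
  delay(1000);
}
```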



It also became immediately evident that a calibration routine was necessary in order to find the limits of the system before executing moves (otherwise there's no way of knowing where anything is, given that the steppers have no internal sense of position). Initially this was a manual routine, which meant that after every restart or code fix I had to manually step the motors into their limit positions in order to define a coordinate system.



I realized this would gobble up an insane amount of time, and instead decided to implement very rudimentary limit switches in order to automate the calibration routine. After some trial and error, the plotter can now run an automated calibration routine and leave its head at 0,0 (a virtual coordinate mapped onto any arbitrary size we feed it, like 1920x1080).
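In outline, the routine is: creep each axis toward its switch, stop on contact, and declare that corner the origin. Something like the sketch below, reusing the `moveXY()` helper sketched above (the pins, directions, and switch wiring are assumptions, not the actual firmware):

```cpp
// Rough homing routine: drive each axis into its limit switch, then declare
// that corner 0,0.
const int LIMIT_X = 9, LIMIT_Y = 10;   // switches to ground, using internal pull-ups
long posX = 0, posY = 0;               // current position, in steps

void moveXY(long dx, long dy);         // the CoreXY helper sketched above

void calibrate() {
  // Creep toward -X until the X switch closes (reads LOW with a pull-up)...
  while (digitalRead(LIMIT_X) == HIGH) moveXY(-1, 0);
  // ...then do the same for -Y.
  while (digitalRead(LIMIT_Y) == HIGH) moveXY(0, -1);
  posX = 0;                            // this corner is now the origin
  posY = 0;
}

void setup() {
  pinMode(LIMIT_X, INPUT_PULLUP);
  pinMode(LIMIT_Y, INPUT_PULLUP);
  calibrate();                         // runs on every reset, i.e. every time
}                                      // the serial connection is (re)opened

void loop() {}
```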



Next, I started setting up a basic OpenFrameworks sketch that takes the mouse coordinates and feeds them to the Arduino via serial - it sends “gX,Y” every second, which matches our Arduino’s command language (e.g. “g200,400”). Every time the serial connection is reset, the plotter runs a new calibration routine, since re-opening the serial connection automatically resets the Arduino.
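On the Arduino side, the parser for that little command language can be as simple as this - a minimal sketch assuming each command is newline-terminated and a 9600 baud rate (both assumptions; what happens with the parsed target is handled by the motion code):

```cpp
// Minimal parser for the "gX,Y" command language, e.g. "g200,400".
long targetX = 0, targetY = 0;

void setup() {
  Serial.begin(9600);                    // must match the OF sketch's baud rate
}

void loop() {
  if (Serial.available()) {
    String line = Serial.readStringUntil('\n');
    if (line.startsWith("g")) {
      int comma = line.indexOf(',');
      if (comma > 1) {
        targetX = line.substring(1, comma).toInt();
        targetY = line.substring(comma + 1).toInt();
        // ...map the target from the virtual space (e.g. 1920x1080) to steps
        // and hand it to the motion code.
      }
    }
  }
}
```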



I added longer footage of an alleycat bike race and created a little square representing a crop of the footage. I used the boilerplate code from our class on frame buffers to create a new window on my smaller, second display.
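The crop itself is just a subsection of the video texture drawn into the second window. A minimal sketch of that part (the multi-window setup from the class boilerplate is omitted, and the filename, crop size, and `drawCrop()` hook are assumptions):

```cpp
// Draw a moving crop of the footage. The main window shows the full frame plus
// the crop rectangle; drawCrop() is what the second window (on the small LCD)
// would render via the class's multi-window boilerplate.
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    void setup() {
        footage.load("alleycat.mp4");          // placeholder filename
        footage.play();
        crop.set(0, 0, 480, 320);              // matches the LCD's aspect ratio
    }
    void update() {
        footage.update();
        // Follow the mouse, clamped so the crop stays inside the frame.
        crop.x = ofClamp(mouseX, 0, footage.getWidth()  - crop.width);
        crop.y = ofClamp(mouseY, 0, footage.getHeight() - crop.height);
        // crop.x / crop.y are also what get sent to the plotter as "gX,Y".
    }
    void draw() {
        footage.draw(0, 0);
        ofNoFill();
        ofDrawRectangle(crop);
    }
    void drawCrop() {                          // called from the second window's draw
        footage.getTexture().drawSubsection(0, 0, ofGetWidth(), ofGetHeight(),
                                            crop.x, crop.y, crop.width, crop.height);
    }

    ofVideoPlayer footage;
    ofRectangle crop;
};
```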



Now, moving my mouse moves the crop, which updates the footage on the small screen, and also moves the plotter head accordingly.

Video of working system

Next Steps


The movement isn’t particularly smooth at the moment. We’re sending coordinates to the Arduino every second - we now need feedback from the Arduino indicating where the plotter head actually is while it’s moving into position. This will result in smoother, position-aware transitions, and will help keep the illusion of the “reveal” intact.
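One way this could work, as a sketch only: have the firmware stream its current step position back over the same serial line while it moves, and let the OF sketch ease the crop toward the reported position instead of jumping to the raw target. The message format and reporting rate below are assumptions:

```cpp
// Sketch of the feedback idea: while stepping toward its target, the firmware
// periodically reports the head's actual position, e.g. "p120,340". This would
// be called from inside the stepping loop in moveXY().
unsigned long lastReport = 0;

void reportPosition(long posX, long posY) {
  if (millis() - lastReport > 50) {      // ~20 updates per second; placeholder rate
    Serial.print('p');
    Serial.print(posX);
    Serial.print(',');
    Serial.println(posY);
    lastReport = millis();
  }
}
```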