
Jordan Westhoff's Blog



Disk Speed Testing on Linux

Hey all,

In the past I’ve spent a bit of time talking about the merits of RAID and other high speed disk setups. Several years ago I also published posts about dual drive setups and did some surgery on my MacBook Pro in an effort to scrap the optical drive in favor of adding an SSD. (Psst, the old article is still available here)

All in all, the overall speed of a computer comes down to all of its internal parts working together to get more work done in less time. This is especially true of high-performance machines, servers, and drive array machines. While a wonderful utility, the Blackmagic Disk Speed Test, is available for Windows and OS X, it isn't available for Linux. I'm sure there are a plethora of GUI disk speed utilities for Linux, but I'm far more drawn to the simplicity and ease of use of the terminal. Since Linux is focused on being minimalist in the pursuit of performance, installing a whole new utility just for spin testing feels a bit wasteful.

As a result, I've written a basic script that does a pretty accurate disk speed test from the command line. The utility should work with all flavors of Linux; I have been using and deploying it across my fleet, all of which run either Debian, Arch, or CentOS 6.5.

The script is far from complex: it takes a user-specified size, writes a block of that size to the disk, reads it back, and times both operations. It is, however, pretty handy and works as fast as your drives can spin. Below you can see the output of the script, run with an argument of 2048 MB against a single Western Digital VelociRaptor 15K RPM drive in one of my servers here in the rack.

WD Raptor Disk Speed Test

Not too shabby for a single 15K drive!

The code isn't proprietary; you are free to use it however you like as an easy sysadmin tool, and it is easily modified to work however you please. Enjoy!
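The full script lives in my repo, but if you just want the gist, here is a minimal sketch of the same idea built around dd. This is not my exact script – the test file location, the cache-dropping step, and the usage are just one reasonable way to do it:

```bash
#!/usr/bin/env bash
# Minimal disk speed test sketch: write a file of the requested size,
# drop the page cache, then read the file back and report throughput.
# Usage: ./disktest.sh <size-in-MB> [target-directory]

SIZE_MB="${1:?Usage: $0 <size-in-MB> [target-directory]}"
TARGET_DIR="${2:-.}"
TESTFILE="$TARGET_DIR/disktest.tmp"

# Write test: dd prints its own throughput summary on stderr.
echo "Writing ${SIZE_MB} MB to $TESTFILE ..."
dd if=/dev/zero of="$TESTFILE" bs=1M count="$SIZE_MB" conv=fdatasync 2>&1 | tail -n 1

# Drop the page cache so the read test actually hits the disk (needs root).
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null

# Read test.
echo "Reading ${SIZE_MB} MB back ..."
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f "$TESTFILE"
```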



Senior Project Update 7/9/2014 – New Servers, Testing and Accelerated Deployment

Hey all,

Summer has been awesome here in Rochester so far, and I've gotten a lot done on several fronts of my project!

Ultimately, I'm still in the stage of hardware and software testing while conducting studies on 4K formats and compression schemes. All of this is valuable for appraising how much computational power I need to get the 4K footage processed as quickly as possible. Since I last posted, I've gotten some additional hardware to use and test, both virtually and locally. Here's the roundup:

For the record, ALL of my Senior Project Updates can be found here, on my Senior Project Page.

Last Post:

Last post, I was comparing smaller, underpowered machines to massive computing desktops to see what the differences were. They were, well, humongous. It turns out the small Micro-ITX board and setup I was using is indeed too slow for any kind of operations work, so I repurposed it for something different but still useful: a Netflix box!

The Alienware machine is a powerhouse even though it is still pretty old, once I stocked it with a powerful GPU and bumped the RAM from 2 GB to 18 GB. Right now it is conducting CPU vs. GPU testing as well as serving as a primary gaming machine in the evenings when I get back from work on campus. Overall, the Alienware definitely won the battle of light and power-efficient vs. power-hungry and high-performance.

New Information:

Okay, here's all the new stuff that I promised. Recently, the school granted me two more physical server machines for use on the project. Both are 64-bit SuperMicro 2U servers taking advantage of AMD's Opteron processing technology, which, I have to admit, is awesome. Each machine is powered by dual quad-core CPUs and 64 GB of RAM.


One of my new SuperMicro machines waiting to be racked with two of my other, older Dell units.

One of the new devices is pictured in the photo – it's the machine on top, and there is a second, identical unit that I had already racked by the time the photo was taken. As you can see, both have considerably larger drive capacities. Each unit holds 8 drives, and currently both are stocked with 15K Raptor drives, which is awesome! RAIDing these 15K server drives (layman's terms: making them work in tandem to increase speed) lets me exceed a standard hard drive's read and write speeds by a factor of 3! This will be invaluable for parsing and spreading frames across the cluster for my project. Right now, each of the drives writes at a ballpark of 82 MB/s and reads at about 260 – 280 MB/s. This is excellent, because for the system I am building, read speeds are far more important than write speeds on these two units. Write speeds will increase as I RAID the devices.
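For anyone curious what "RAIDing" drives looks like on Linux, a striped array can be assembled in a few commands with mdadm. The sketch below is illustrative only – the device names, RAID level, and mount point are placeholders, not my actual layout:

```bash
# Sketch: build a striped (RAID 0) array from four drives with mdadm,
# then format and mount it as scratch/render space.
# Device names and mount point are placeholders.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/scratch
sudo mount /dev/md0 /mnt/scratch

# Quick-and-dirty buffered read check on the new array.
sudo hdparm -t /dev/md0
```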

On top of this, I have been developing a lot of the skeleton dev software for my project. The first stage has been configuring each server individually; I decided not to go with a major configuration-management solution like Puppet, Salt, or Ansible, since I'm not sure all of that setup time is worth the slight boost I would get during only the configuration phase of each server. Instead, I've written a full suite of scripts that kick into effect once CentOS is installed on each machine. I went with CentOS since it focuses on enterprise support, security, and longevity (the current CentOS distro is supported for 7 years). Once the OS is installed, each machine runs totally autonomously once it connects to my authentication and has all of the account info it needs. The machines install all of the necessary programs and services, in addition to syncing other repositories and cloning them locally. Once a machine is set up, it notifies me via log that it is ready to join the cluster so processing can begin. As I amass more and more hardware, local and virtual, easy deployment of each unit becomes increasingly important, because once the semester begins again it will be very difficult to find extra time for more efficient configurations.
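To give a sense of what those post-install scripts do, here is a heavily stripped-down sketch. The package list, repository URL, and log path are made up for illustration – the real suite does quite a bit more:

```bash
#!/usr/bin/env bash
# Sketch of a post-install bootstrap for a freshly installed CentOS 6 node.
# Package list, repository URL, and log path are illustrative placeholders.
set -euo pipefail

LOGFILE=/var/log/cluster-bootstrap.log
exec >>"$LOGFILE" 2>&1

echo "$(date) -- bootstrap starting on $(hostname)"

# Install the services and tools the node needs.
yum -y update
yum -y install git rsync ntp openssh-server

# Clone the project repositories locally so the node can work on its own.
mkdir -p /opt/cluster
git clone https://example.com/cluster-scripts.git /opt/cluster/scripts || true

# Make sure the required services come up on boot.
chkconfig ntpd on && service ntpd start
chkconfig sshd on && service sshd start

echo "$(date) -- bootstrap complete; node is ready to join the cluster"
```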

In the next week or so I should be getting access to more hardware, and I also have a lot of cool code to share with you all; most of it is Linux-based deployment, disk testing, and a variety of other tools. Look for that in my Git and other repos, hosted here!




SHOOTOUT! Data Usage Statistics!

Hey guys!

Over the weekend I spent a good deal of time looking at just how intensive a full RAW data workflow can be. I also wanted to compare the burden of 4K RAW vs. ARRI's 2.8K RAW recorded via an S.Two device and see which required more data overhead to work with. This lets us look simply at how much drive space is required and discount CPU usage, since pretty much all of my machines were running at nearly full bore whenever renders were required.

While storage is not really a problem for a lot of industry professionals, it can be quite the burden for independents or students. Not every student has several terabytes (a terabyte is 1,024 gigabytes) of unused, high-speed storage. A lot of people wonder if they can get away with slower, basic desktop drives for data of this proportion, but it really comes down to how long you want to wait. Slow drives serve information, well, slowly. Waiting for 300 GB of renders to load can take ages, and when deadlines are at stake, it really isn't viable.

Below, I’ve compiled a good deal of raw statistics from our recent shootout project. Since I was in charge of managing the data, image processing and running the servers we worked off of, I have the entirety of the raw footage as well as a significant portion of the renders. This accounts for tons and tons of space, enough space in fact that I thought comprehensive statistics might be helpful to visualize where all of that information is going.

There are a couple of things to note before we get started. In some of the statistics, I grouped total usage by camera, which encompasses all of the information used start to finish on each camera platform. In one or two other statistics, I broke the information up to reflect intermediate stages. For the ARRI D-21, this meant converting raw S.Two DPX files to .ARI files and then exporting them again from the ARRI RAW Converter to DPX or TIFF file sequences to color grade and make a final export from. For the Sony HD footage, it was simply a matter of taking the camera files off the onboard SD card, then grading and re-exporting. In 4K, however, there was significantly more to do. Dumping the card gave a nice, proprietary MXF wrapper containing all of the files, which had to be opened with Sony RAW Viewer in order to convert them to 16-bit DPX files. These could then be graded and exported again to a DPX or TIFF sequence for analysis and editing. Each of these steps consumes storage, as you can imagine, and it presents quite a trend in the statistics.


This accounts for all of the ‘mission critical’ information stored for the shootout.

Here we can see just how much data there was overall. In total, the final aggregate size of all resources exceeded 1.6 TB! This included all footage, start to finish, from the ARRI, the Sony HD, and the Sony 4K, as well as renders, CC passes, graphics for our final video, and any other data in between. Keep in mind that the actual camera footage (which comprised a significant portion of the overall data used, but more on that later) totaled only about 10 minutes per camera (and less for the Sony 4K). This is because most of the shots used in the shootout were of charts or color scenes – the longest scenes were barely over 50 seconds apiece. Shooting an entire film on any of these platforms would therefore consume an incredible amount of data. Broken down above are four different categories, each perhaps a bit vague, so I'll take a moment to explain them.

The first, and the largest, is the Footage Archive. This is an aggregate of just the base footage captured from each camera. It also incorporates some intermediate files in the case of the .ARI footage – essentially, everything classified here was footage ready to go into editing minus any major color correction.

The Shootout Archive contains all of the intermediates of the pick scenes and the color-corrected scenes. Any footage observed and chosen to be good enough for analysis went on to continue the chain of picture processing. The files in this directory are renders from the S.Two that were then processed in ARRI's ARC, as well as the chosen Sony HD and 4K clips, which underwent their respective processing steps too.

Shootout MAIN is the working directory for all of the analysis, as well as the video production portion of the project. Here are all of the final renders, color correction finals, stock footage, B-roll, preliminary video screening renders, and narration, as well as all of the graphics our team generated.

Finally, there is the Web Access Point directory. This was a separate directory created on a network server to provide each member of the team with fast, reliable intermediary storage for their own assets in production: screen captures, editing files, project files, you name it. This is the working miscellany that helped make the workflow so efficient – each member had a fast directory to work from while contributing to the final project being assembled in real time.
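If you want to tally up a directory tree like this yourself, du does the heavy lifting. A quick sketch – the directory names here are just placeholders standing in for the categories above:

```bash
# Sketch: per-directory totals for the project tree, largest first.
# Directory names are placeholders matching the categories described above.
du -sh "Footage Archive" "Shootout Archive" "Shootout MAIN" "Web Access Point" \
    2>/dev/null | sort -hr

# Grand total for everything under the project root.
du -sh .
```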


Each day of shooting generated different amounts of storage requirement based on scene.

Since the shootout was spread over three days (technically four, counting the 4K session), it was useful to look at how usage varied by day. Some of the graph labels were cut off, but the four largest portions correspond to the longest shots. Day 1 files came close to taking the lead in storage, but our Day 3 files won out with 19.6% of total data usage – these stats only cover the files coming from the ARRI and the Sony in HD video mode. The third largest portion, at 17%, was our fourth day of shooting, which comprises all of the Sony 4K RAW files. Each of the much smaller portions is broken up by shot – some scenes took many shots and some took far fewer.


This gives a better look at how production workflow can impact your data needs for each project.

Finally, here is a look at how much information from each step of production contributes to the total. This figure ties directly into the final, cleaned-up and organized storage stats of the shootout in its entirety. Of the approximately 1.6 TB required, the most costly stage of production was generating all of the intermediate files. This was especially true of the 4K tests, which accounted for almost half of this data despite shooting for only about 20% as long as the ARRI and Sony HD tests. Both RAW tests required multiple intermediate steps, which chewed through tons of space because of their respective resolutions. We chose to work with DPX and TIFFs since those are lossless formats and overall exhibited the best quality.
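To see why the intermediates balloon so quickly, here is some rough napkin math assuming uncompressed 16-bit RGB 4K DPX frames at 24 fps. Real files vary with resolution, packing, and headers, so treat these as ballpark figures:

```bash
# Rough napkin math for uncompressed 16-bit RGB 4K DPX intermediates.
# Frame geometry below is an assumption; actual files vary.
WIDTH=4096; HEIGHT=2160; CHANNELS=3; BYTES_PER_SAMPLE=2; FPS=24

BYTES_PER_FRAME=$((WIDTH * HEIGHT * CHANNELS * BYTES_PER_SAMPLE))   # ~50 MiB per frame
MB_PER_SEC=$((BYTES_PER_FRAME * FPS / 1024 / 1024))                 # ~1.2 GiB of footage per second
GB_PER_MIN=$((BYTES_PER_FRAME * FPS * 60 / 1024 / 1024 / 1024))     # ~71 GiB per minute

echo "Per frame:  $((BYTES_PER_FRAME / 1024 / 1024)) MiB"
echo "Per second: ${MB_PER_SEC} MiB"
echo "Per minute: ${GB_PER_MIN} GiB"
```

At roughly 70 GB per minute of graded intermediates, it doesn't take much footage for intermediates to dominate a total like the 1.6 TB above.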

All in all, shooting RAW is a very exhausting process, both from a processing and a storage perspective. Your storage needs will depend on the camera and the codec/format you choose to edit in, but it's always safe to budget one to two terabytes for shooting a short, and always, always remember to BACK UP your information! All of the statistics here leave out the backups that were set in place to safeguard our information. At any one point, our information was backed up in two additional places – one a hardware RAID attached to a workstation on the other end of campus, and the other a full minute-to-minute backup stored on a NAS. That NAS also pulled all of the web assets from each member in order to keep their assets online and safe at the same time. Feel free to contact me if you have questions as well!
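If you're setting up something similar, the frequent NAS backup can be as simple as an rsync job on a cron schedule. A sketch – the paths, hostname, and interval are made up for illustration, and our actual setup differed:

```bash
# Sketch: push the working project tree to a NAS with rsync.
# Paths and hostname are illustrative placeholders.
rsync -a --delete --partial /srv/shootout/ backup-nas:/volumes/shootout-backup/

# Example cron entry to repeat the sync every 10 minutes:
#   */10 * * * * rsync -a --delete --partial /srv/shootout/ backup-nas:/volumes/shootout-backup/
```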

In the future I’ll be making a post dedicated to the labyrinth of storage and why different types are better than others, as well as a look into what I’m using to manage all of this information! Thanks for reading!



Behind the Scenes: Shooting Sony 4K RAW

Recently, as part of the MPS Shootout we just finished, my team and I had a great opportunity to shoot with some interesting Sony hardware, since our main objective was to shoot and compare the RAW cinema capabilities of the ARRI D-21 and the Sony NEX-FS700.

Natively, the Sony FS700 can't shoot 4K. However, with a gracious software update from SONY, implemented and installed by the RIT SoFA cage, the feature is unlocked. While the sensor and the camera's onboard hardware can handle capturing 4K, the camera itself has no reliable way to record it. Without a hardware upgrade, the FS700 offers only an SD card slot, which is neither fast enough nor high enough in capacity to even begin to think about recording 4K content. Hence, enter the Sony AXS-R5 + HXR-IFR5 4K bundle. The school didn't have these units available, but with a grant we were able to rent the equipment for a night in order to conduct our tests.

The most expensive hard drive toaster you will ever buy (for now, until… 8K?)

It was actually a pretty difficult feat getting our paws on this particular setup. The physical recording unit, the AXS-R5, is built and engineered for Sony's PMW-F5 and CineAlta F55 cameras – it isn't natively meant for the NEX-FS line. SONY solved this problem by engineering an "interface" unit, the HXR-IFR5, which takes in the 4K signal over an SDI cable and pushes it to the recorder to be saved. Together, the two units cost just over $10,500, and that doesn't include mounting, storage, or other accessories. For our test we used a single 512 GB SSD, also manufactured by SONY, and it really did the trick! Because of the difficulty in acquiring the devices we couldn't shoot on all of our test days, but a small rental company out of Tennessee pulled through for us. Enter LensRentals.com! With the unit acquired, I could proceed to unbox it and start recording!


Initial Vanguard package containing our SONY gear.


All of our SONY gear nestled inside of its shipping case.


All unboxed and joined together – just need a camera!


After the unit was unboxed, we were able to test it out in an actual scene! We set up SoFA's Studio B for our tests, which gave us plenty of space to work, as well as plenty of lights, tables, and surfaces to set up our gear and mount our wall test targets. We shot a variety of scenes, mostly charts, but we also got a few shots featuring aesthetic objects for style.


Studio B setup for 4K RAW


This was our go-to setup. The camera (SONY FS700) recorded to its onboard SD media and fed the 4K unit over SDI, and the same signal feed was used for monitoring. Since our 4K and HD formats shared the same aspect ratio, the framing did not change, which meant we could safely use the Panasonic HD monitor to see what the camera was seeing from the DIT station. On set we had an Apple MacBook Pro to check files once they were recorded and ingested. All in all, this setup was far less complicated than some of the others, like the ARRI D-21 rig, which was a spaghetti nightmare.


S.Two recording setup for the Arri D-21

For the most part, our testing went well. We were able to gather all of the shots we wanted and several others. One snag did occur, though, and I think it is best described by the beautifully composed Snapchat that one of my partners, Carly Cerquone, sent to detail the issue.


Yup, that’s right. We made the ol’ rookie mistake.

In the end, though, the project was a ton of fun, and the entire team and I learned a whole lot about the process of shooting and working with 4K. It is significantly different (and far more time consuming) than any other workflow currently around, and you can find all of our findings and video information on the Shootout page of my blog here as well. Thanks for reading! As one final note, we decided to engineer our own dolly, for pure creativity's sake, to capture the opening scene of the MPS Shootout video – here was our super innovative approach. Below are some other photos from on set as well.


Carly and David, rocking the homemade dolly


Carly, David and Matt lighting the scene


Matt lighting while some old B-Roll transfers



Updates for the Summer!

Hey all,

Well, the academic semester is finally over, which means my junior year at RIT is complete! It was certainly a busy one! I have a lot of really cool summer plans, which I'll post here, but I wanted to recap the year a little bit, and I'll be spending the next week or two uploading a lot of the work my teammates and I accomplished over the semester.

For the summer, I'll be returning to RIT to work as a member of our research computing department. Originally I had investigated traveling to LA after receiving job offers from SONY and IMAX, but after some thinking I decided to stick around ROC and take an innovative position with RIT RC, since it catered a bit more to my interests and offered some really valuable opportunities to learn about open source and parallel computing. There I will be maintaining, and working to make research advances on, 4K video streaming over IP, as well as a variety of other tasks that tie into parallel computing, open source computing, global teleconferencing, and open source global video delivery to large tiled displays!

Also, this semester we finished the third-year MPS Shootout – a deeply analytical camera comparison test designed to pit two cinema-grade systems against each other in order to determine which is better for upperclassmen in the RIT School of Film and Animation to produce films on, based on a variety of factors. My role in the project was primarily to oversee DIT and technician work as well as programming and analysis. In the next week or so, our final public video will be posted with our results in a video format for an easy synopsis of the project. Our team was responsible for the most sophisticated video systems, the Sony FS700 and the ARRI Arriflex D-21, and was tasked with comparing their RAW workflows.


Over the course of the semester this was a very common way to find me – peering over the lid of my laptop at any given time.


Additionally, my senior thesis project was approved, which means research for it will begin and continue throughout the summer. I'll be posting a lot of updates here (Senior Project Page), with some translations into plain English as well (not just engineering speak!). As I reach checkpoints and make progress, I'll make it a point to update that page so any interested parties can follow along!

As always, thanks for reading and look for more content in the coming days – I’m home now and I’ve begun to catch up on some much needed sleep so work should be updated soon!


-Jordan



Sikki Sakka – Color Grading

Earlier in the year, a very kind artist living in Rochester reached out to me via my page and asked me to do the color grading for an indie music video that he and several other artists had banded together to make. The song, titled "Ngi Dem" (translation: "I Remember"), is a track dedicated to the artists' loved ones who died in their home continent of Africa. The group of artists goes by Sikki Sakka, also assisted by Bachir Kane.

The music video was shot on the Red One on location in Africa, and the majority of the footage was very impressive. Tasked with color correcting the film, I set out to do the best I could.


I chose to use Final Cut Pro and Apple Color to grade the clip, although I am currently upgrading my home systems to use Adobe and DaVinci since I have far more computational power for Windows/UNIX packages. It was a very interesting color grading experience: some white balance had been mis-set during shooting, some shots were over- or under-exposed, and some preliminary editing had already been done. Along with this, some quick Final Cut Pro 7 Three-Way Color Corrector presets had been applied as well. While this made some shots difficult, it was still very enjoyable, and boy, this video was shot with sound in mind. The physical audio talent of the members of Sikki Sakka was incredible, and the audio mastering was done very professionally. This was particularly important because over the course of the color grade I probably watched this piece about 200 times! That's a really painful experience if the audio is awful! (Thankfully it was great – and had lots of bass.) So it really boiled down to a pleasant experience in the edit bay. Here are some neat before-and-after screen caps of one of the lead rappers!


Before Apple Color pass



After Apple Color pass!

After the project was done, I was told the final render was to be sent off to Africa, where it would be broadcast across various countries! Hopefully Sikki Sakka is doing well and their music is continuing to spread – best of luck to you guys!



FreeNAS, woo!

Hello all,

The last couple of weeks have been busy, but I've made significant progress on a lot of the projects I last posted about, and I figured I'd share some of the gory details from the tech sphere.

First off, the FreeNAS server is finally up and running. FreeNAS is a freely distributed version of the FreeBSD operating system built specifically for running a personal or enterprise-level Network Attached Storage (NAS) server. Since I have a solid array of computers and Dropbox is very limiting, I figured that with the insane bandwidth RIT offers students, it would be an infinitely valuable addition to my computing fleet.


The welcome screen of FreeBSD based FreeNAS

I decided to repurpose a server that had been running Windows Server 2012 Datacenter Edition. Since FreeNAS is best run on the ZFS file system, which requires 64-bit hardware, I decided to use one of the blade servers from my rack – a Dell CS24-SC cloud server node. Dual quad-core Intel Xeon processors handle all of the work, and it has enough RAM (16 gigabytes) to handle all of the serving very capably. Installed are 6 terabytes of raw storage (two 2 TB drives and two 1 TB drives), which should be plenty to host all of my common files.

Since I use all three major operating systems to access these files (OS X, Windows, and Linux), I also set up a variety of shares within FreeNAS. This includes a Samba/CIFS share for the Windows machines (one of my personal laptops and two of my other servers), an AFP (Apple Filing Protocol) share for my personal MacBook Pro workstation, all of the school lab machines, and a couple of office machines, as well as an NFS share for Linux access.
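For reference, mounting shares like these from a Linux client is a one-liner per protocol. A sketch – the server address, share names, and username are placeholders, not my actual configuration:

```bash
# Sketch: mounting the FreeNAS shares from a Linux client.
# Server address, share names, and username are placeholders.
sudo mkdir -p /mnt/nas-nfs /mnt/nas-smb

# NFS export (the Linux-facing share).
sudo mount -t nfs 192.168.1.50:/mnt/tank/shared /mnt/nas-nfs

# Samba/CIFS share (the same one the Windows machines use); needs cifs-utils.
sudo mount -t cifs //192.168.1.50/shared /mnt/nas-smb -o username=jordan
```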

On top of this, I decided to enable SSH and FTP/SFTP access, so access from pretty much any device is guaranteed! A week or so ago, once the server was running and I had tested the validity of the drives, I migrated all of my common data over. Let me tell you, moving 5.5 terabytes of information takes a loooooong time! Most of the slowdown was the result of moving files over the LAN rather than something like FireWire, Thunderbolt, or USB 3.0; even so, it clocked along at a very good pace since my apartment is very well connected – RIT gives each of us several gigabit (10/100/1000) jacks in each apartment, and the network is full fiber! This makes accessing the files remotely very, very handy, since the speeds support even the most intensive tasks (even streaming Blu-ray content from computer to computer on campus!).

My final goal for the project is to open up a drive on the server for Apple Time Machine backups. I'll keep you all posted – thanks for stopping by!