Jordan Westhoff's Blog
Disk Speed Testing on Linux

Hey all,

In the past I’ve spent a bit of time talking about the merits of RAID and other high speed disk setups. Several years ago I also published posts about dual drive setups and did some surgery on my MacBook Pro in an effort to scrap the optical drive in favor of adding an SSD. (Psst, the old article is still available here)

All in all, the overall speed of a computer can be attributed to the combination of all of its internal parts working together to get more work done in less time. This is especially true of high performance machines, servers and drive array machines. While a wonderful utility, known as the Blackmagic Disk Speed Test, is available for Windows and OS X, it isn't available for Linux. While I am sure there are a plethora of GUI disk speed utilities for Linux, I'm particularly drawn to doing this from the terminal because of its simplicity and ease of use. Since Linux is focused on staying minimal in the pursuit of performance, installing a whole new utility just for spin testing seems a bit wasteful.

As a result, I've written a basic script that does a pretty accurate disk speed test via the command line. The utility should work with all flavors of Linux; I have been using and deploying it across my fleet, all of which run Debian, Arch or CentOS 6.5.

The script is far from complex: it takes a user input for how large a block of data to write to the disk, then writes and reads a block of that size while timing each operation. It is, however, pretty handy and works as fast as your drives can spin. Here, you can see the output of the script. I ran it with an argument of 2048 MB against a single Western Digital VelociRaptor 15K RPM drive in one of my servers here in the rack.

WD Raptor Disk Speed Test

Not too shabby for a single 15K drive!

The code isn't proprietary; you are free to use it however you like as an easy sysadmin tool, and it is easily modified to work however you please. Enjoy!
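For the curious, here is a minimal sketch of the approach the script takes, built around dd. The file path, variable names and cache-dropping step are illustrative rather than an exact copy of the script in my repo:

```bash
#!/bin/bash
# Rough sketch of a dd-based disk speed test. The block size in MB is taken
# from the first argument (e.g. ./disktest.sh 2048).

SIZE_MB=${1:-1024}        # how much data to write and read, in MB
TESTFILE=./disktest.tmp   # scratch file on the disk being tested

# Write test: stream zeros to disk, flushing to the platters so the timing is honest.
echo "Writing ${SIZE_MB} MB..."
dd if=/dev/zero of="$TESTFILE" bs=1M count="$SIZE_MB" conv=fdatasync 2>&1 | tail -n 1

# Drop the page cache so the read test actually hits the disk (needs root).
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null

# Read test: pull the file back and throw it away.
echo "Reading ${SIZE_MB} MB..."
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f "$TESTFILE"
```

dd prints its own throughput summary (the MB/s figure) on the last line of its output, which is all a quick spin test really needs.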




Senior Project Update 7/9/2014 – New Servers, Testing and Accelerated Deployment

Hey all,

Summer has been awesome here in Rochester so far, and I've gotten a lot done on different parts of my project!

Ultimately, I'm still in the stage of hardware and software testing, while conducting studies on 4K formats and compression schemes. All of this is valuable in appraising how much computational power I need to conduct all of the operations required to get the 4K footage processed as quickly as possible. Since I last posted, I've gotten some additional hardware to use and test, both virtually and locally. Here's a roundup of some of it:

For the record, ALL of my Senior Project Updates can be found here, on my Senior Project Page.

Last Post:

Last post, I was comparing smaller, underpowered machines to massive computing desktops to see what the differences were. They were, well, humongous. It turns out the small Micro-ITX board and setup I was using is indeed too slow for any kind of operations work. Hence, I re-purposed it for something different but still useful: a Netflix box!

The Alienware machine is a powerhouse even though it is still pretty old, once I stocked it with a powerful GPU and added a bit more RAM (going from 2 GB to 18 GB). Right now it is conducting CPU vs. GPU testing as well as serving as a primary gaming machine in the evenings when I get back from work on campus. Overall, the Alienware definitely won the battle of light and power-efficient vs. power-hungry and high-performance.

New Information:

Okay, here's all the new stuff that I promised. Recently, the school granted me two more physical server machines for use on the project. Both are 64-bit SuperMicro 2U servers taking advantage of AMD's Opteron processing technology, which I have to admit is awesome. Both machines are powered by dual quad-core CPUs and 64 GB of RAM.


One of my new SuperMicro machines waiting to be racked with two of my other, older Dell units.

One of the new devices is shown in the photo; it's the machine on top, and there is a second, identical unit that I had already racked when the photo was taken. As you can see, both hold considerably more drives. Each unit has bays for 8 drives, and currently both are populated with 15K Raptor drives, which is awesome! The 15K server drives, which I have RAIDed (layman's terms: set up to work in tandem to increase speed), let me exceed a standard hard drive's read and write speed by a factor of 3! This will be invaluable for parsing and spreading out frames for my project across the cluster. Right now, each of the drives writes at a ballpark of 82 MB/s and reads at a rate of about 260-280 MB/s. This is excellent because, for the system I am building, read speeds are far more important than write speeds for these two units. Write speeds will increase as I RAID the devices.
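For anyone wanting to replicate that kind of setup, a striped array can be assembled with mdadm. The sketch below is a generic illustration; the device names, RAID level and chunk size are assumptions rather than my exact layout:

```bash
# Hypothetical example: stripe the drive bays into a single fast array (RAID 0)
# with mdadm. /dev/sd[b-i] and the chunk size are placeholders.
sudo mdadm --create /dev/md0 \
    --level=0 \
    --raid-devices=8 \
    --chunk=512 \
    /dev/sd[b-i]

# Put a filesystem on the array and mount it where frames will be staged.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/frames
sudo mount /dev/md0 /mnt/frames
```

Striping trades redundancy for throughput, which is a reasonable trade here since the frames being processed always exist somewhere else on the network.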

On top of this, I have been developing a lot of the skeleton dev software for my project. The first stage of this has been configuring each server individually; I've decided not to go with a major configuration-management solution like Puppet, Salt or Ansible, since I'm not sure the setup time is worth the slight time savings I would get during only the configuration phase of each server. As a result, I've written a full suite of scripts that kick into effect once CentOS is installed on each machine. I decided to go with CentOS since it focuses on enterprise support, security and longevity (the current CentOS distro is supported for 7 years).

Once an OS is installed, each machine can run totally autonomously once it connects to my authentication and has all of the account info it needs. The machines install all of the necessary programs and services, in addition to syncing other repositories and cloning them locally. Once each of them is set up, it notifies me via log that it is ready to join the cluster and processing can begin. As I begin to amass more and more hardware, local and virtual, easier deployment of each unit becomes increasingly important, because once the semester begins again it will be very difficult to find extra time to set up more efficient configurations and whatnot.
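To give a rough idea of what those scripts look like, here is a stripped-down, hypothetical bootstrap sketch for a fresh CentOS 6 node. The package list, repo URL and log path are placeholders rather than my actual configuration:

```bash
#!/bin/bash
# Hypothetical post-install bootstrap for a fresh CentOS node.
set -e
LOG=/var/log/node-bootstrap.log

# Pull in the packages every node in the cluster needs.
yum -y install git rsync ntp gcc make >> "$LOG" 2>&1

# Clone the shared tooling repositories locally so the node can work on its own.
git clone https://example.com/cluster-tools.git /opt/cluster-tools >> "$LOG" 2>&1

# Make sure clocks agree across the cluster before any processing starts.
service ntpd start
chkconfig ntpd on

# Signal, via the log, that this node is configured and ready to join.
echo "$(hostname) bootstrap complete: $(date)" >> "$LOG"
```

The real suite does more (accounts, authentication, service configuration), but the pattern is the same: run once after install, log everything, and end with a clear "ready to join the cluster" marker.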

In the next week or so I should be getting access to more hardware. I also have a lot of cool code to share with you all; most of it covers Linux-based deployment, disk testing and a variety of other tools. Look for it in my Git and other repos, hosted here!