Month: September 2018


Designing the School of Business

I have been using CAD to make different models and designs since I was in high school. It's so satisfying to make parts in a program that you can then bring to life with 3D printing. The OPIM Innovation Space has several 3D printers with some really special capabilities, and I wanted to put my skills as a designer, and the abilities of the printer, to the test.

The MakerBot Z18 is easily one of the largest consumer-grade printers available. It can print within an 18 by 12 by 12 inch build volume. At 2,592 cubic inches, that's one and a half cubic feet! I challenged myself to build a model of UConn's School of Business and then 3D print it at the largest size possible.

I started on Google Maps and traced the outline of the School of Business onto a template. Then I walked around the outside of the school and took pictures of its notable features. It took several days to capture the details of the building, such as cutting out windows and creating the overhanging roof, in order to make the model accurate. I even hollowed out the model so that it could accommodate a microcontroller or LEDs if we wanted to use some IoT technology to upgrade the project.

Printing this behemoth of a project was a challenge. The entire design printed on its side so that it could use nearly all of the Z18's build volume, and even at full efficiency it was estimated to take 114 hours to print. I contemplated cutting it into two pieces and printing them separately, but it would be so much cooler to use the full size of the printer. It took several tries before I was able to print the School of Business in one shot. After several broken filament rolls and failed prints, the entire school was finished.

This project gave me great insight into the manufacturing problems that come with using 3D printing technology to produce exceedingly large parts. The model used about three pounds of filament and really pushed the limits of the technology. A miniature School of Business was not only a great showcase for the OPIM Department and for OPIM Innovate, but also a testament to the future of the technology. Maybe in the future buildings will actually be 3D printed. It will be super exciting to see how this technology, and the CAD software that complements it, evolve moving forward.

By: Evan Gentile, Senior MEM Major



Learning Facial Recognition

Nowadays, companies like Apple and Facebook make software that can unlock your phone or tag you in a photo automatically using facial recognition. Facial recognition first attracted me because of its incredible ability to translate a few lines of code into recognition that mimics the human eye. How can this technology be helpful in the world of business? My first thought was cyber security. Although I had only one semester, I wanted to be able to use video to recognize faces. My first attempt used a Raspberry Pi as the central controller. Raspberry Pis are portable, affordable, and familiar because they are used throughout the OPIM Innovation Space. There is also a camera attachment, which I thought made the Pi perfect for my project. After realizing that the program I attempted to develop used too much memory, I moved to using Python, a programming language, on my own laptop.

Installing Python was a challenge in itself, and I was writing code in a language I had never used before. Eventually I was able to get basic scripts to execute on my computer, and then I could install OpenCV and connect it to Python. OpenCV is an open-source computer vision library that ships with stored face detection data, which lets you quickly run face recognition programs in Python. The setup was really difficult; if a single file location was not specified exactly, or had moved by accident, then the entire OpenCV library wouldn't be accessible from Python. It was these programs that taught me how to take a live video feed and grayscale it so the computer wasn't overloaded with information. Then I identified the outlines of nearby objects in the video, traced them with a rectangle, and used classifier files originally developed by Intel to recognize whether those objects were a person's face. The final product was really amazing. The Python script ran by using OpenCV to reference a large .xml file which held the data on how to identify a face. The program refreshed in real time, and as I looked into my live camera feed I watched the rectangular box follow the contour of my face. I was amazed by how much effort and how much complex code it took to carry out a seemingly simple task. Based on my experience with the project, I doubt that Skynet or any other sentient robots are going to be smart enough to take over the world anytime soon. In the meantime, I can't wait to see how computer vision technology will make the world a better place to live in.
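For anyone curious what that pipeline looks like in code, here is a minimal sketch of the approach described above, using one of OpenCV's bundled Haar cascade classifiers (the large .xml file mentioned earlier). Depending on your OpenCV version, you may need to point at the .xml file's location manually.

```python
import cv2

# Load a bundled Haar cascade classifier: the .xml file
# that holds the face-detection data.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# Open the default camera for a live video feed.
capture = cv2.VideoCapture(0)

while True:
    ok, frame = capture.read()
    if not ok:
        break

    # Grayscale the frame so the detector isn't overloaded with information.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect faces and trace each one with a rectangle.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("Face detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

capture.release()
cv2.destroyAllWindows()
```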

By: Evan Gentile, Senior MEM Major



Innovate Taking Steps Towards Splunk

After taking OPIM 3222, I learned about this really cool big data platform called Splunk. Its big selling point is that it can take any machine data and restructure it so that it is actually useful. So after I got a Fitbit Charge HR, my first thought was, "I wonder what we can find if we Splunk this." I worked with Professor Ryan O'Connor on this project, and after several hiccups, we finally got a working add-on. Back when we first started, we found a "Fitbit Add-On" on Splunkbase that we just had to install and then we would be ready to go. After spending a lot of time trying to set it up, we learned that it was a bit outdated and had some bugs that made it quite difficult to use. After a while the project got pushed to the side as we worked on other IoT-related projects in the OPIM Gladstein Lab.

By the time another spark of inspiration came along, the old add-on had been removed because of its age and bugs, so we had to take matters into our own hands. Professor O'Connor and I split the work: I would use the Fitbit API to pull data, and he would work on getting it into Splunk. I wrote Python scripts to collect data on steps, sleep, and heart rate. We then found that the Fitbit API requires OAuth2 tokens to be refreshed every few hours in order to continuously pull data. Professor O'Connor had already tackled a similar issue when making his Nest add-on for Splunk, using a Lambda function from Amazon Web Services to process the OAuth2 request. We decided to use the same approach for the Fitbit, the major difference being that the function is called every few hours. Professor O'Connor then made a simple interface so users can get their API tokens and set up the add-on in minutes. We then took a look at all of the data we had and decided the best thing we could make was a step challenge. We invited our friends and family to track their steps with us and created a dashboard to summarize activity levels, show an all-time leaderboard, and visualize steps taken over the course of the day. However, this app only scratches the surface of what can be done with Fitbit data. The possibilities are endless from both a business and a research perspective. We have already gotten a lot of support from the Splunk developer community and we are excited to see what people can do with this add-on.
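To give a rough idea of the data-collection half of the project, here is a minimal sketch of pulling a day's step counts from the Fitbit Web API and refreshing the OAuth2 token. It assumes you have already registered a Fitbit app and obtained tokens; the endpoint paths follow Fitbit's public API documentation, the credentials are placeholders, and the token-storage details are simplified.

```python
import requests

TOKEN_URL = "https://api.fitbit.com/oauth2/token"
STEPS_URL = "https://api.fitbit.com/1/user/-/activities/steps/date/today/1d.json"

def refresh_access_token(client_id, client_secret, refresh_token):
    """Exchange a refresh token for a fresh access token (OAuth2)."""
    resp = requests.post(
        TOKEN_URL,
        auth=(client_id, client_secret),  # HTTP Basic auth with app credentials
        data={"grant_type": "refresh_token", "refresh_token": refresh_token},
    )
    resp.raise_for_status()
    return resp.json()  # contains a new access_token and refresh_token

def get_steps(access_token):
    """Pull today's step counts for the authorized user."""
    resp = requests.get(
        STEPS_URL, headers={"Authorization": "Bearer " + access_token}
    )
    resp.raise_for_status()
    return resp.json()["activities-steps"]

# Example usage (all credentials are placeholders):
# tokens = refresh_access_token("MY_CLIENT_ID", "MY_SECRET", "MY_REFRESH_TOKEN")
# print(get_steps(tokens["access_token"]))
```

In the actual project, the refresh step ran in an AWS Lambda function on a schedule, so the scripts always had a valid token when they polled for data.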

By: Tyler Lauretti, Senior MIS Major


Introducing Alexa to Splunk

Before the introduction of Apple's "Siri" in 2011, artificial intelligence voice assistants were no more than science fiction. Fast forward to today, and you will find them everywhere, from your phone, where they help you navigate your contacts and calendar, to your home, where they help you around the house. Each smart assistant has its pros and cons, and everyone has their favorite. Over the last few years I have really enjoyed working with Amazon's Alexa smart assistant. I began working with Alexa during my summer internship at Travelers in 2016, where I attended a "build night" after work and learned how to start developing with Amazon Web Services and the Alexa platform. Since then, I've developed six different skills and received Amazon hoodies, t-shirts, and Echo Dots for my work.


So I mentioned "skills", but I didn't really explain them. An Alexa "skill" is like an app on your phone. Skills use Amazon's natural language processing, the same technology behind the Amazon "Lex" platform, and other capabilities to do whatever you can think of. Some of the skills I have made in the past include a guide to West Hartford, a UConn fact generator, and a quiz to determine whether you should order pizza or wings. However, while working in the OPIM Innovate lab I have found other uses for Alexa that go beyond novelties. The first was using Alexa to query our Splunk instance. Lucky for us, the Splunk developer community has already done a lot of the legwork. The "Talk to Splunk with Amazon Alexa" add-on handles most of the networking between Alexa and your Splunk instance. In order for Alexa and Splunk to communicate securely, we had to set up a private key, a self-signed certificate, and a Java keystore. After a few basic configuration steps on the Splunk side, you can start creating your Splunk Alexa skill. The skill is configured to talk to your Splunk instance, but it is up to you to determine what queries to run. You can create "intents" that take an English phrase and convert it to a Splunk query that you write. You can also use this framework to make Alexa do things like play sounds or say anything you want. For example, I used the Splunk skill we created to do an "interview" with Alexa about Bank of America's business strategy for my management class. Below you can find links to the Alexa add-on for Splunk as well as a video of that "interview".
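The add-on has its own configuration format for wiring intents to searches, so this is not its actual mechanics, but here is a hypothetical sketch of the underlying pattern: an intent handler that maps a spoken phrase to a Splunk query and runs it over Splunk's REST API. The intent name, the search string, and the credentials are all illustrative.

```python
import requests

SPLUNK_HOST = "https://localhost:8089"  # Splunk management port
AUTH = ("admin", "changeme")            # placeholder credentials

# Hypothetical mapping from Alexa intent names to Splunk searches you write.
INTENT_SEARCHES = {
    "StepLeaderIntent": (
        "search index=fitbit | stats sum(steps) AS total BY user "
        "| sort - total | head 1"
    ),
}

def handle_intent(intent_name):
    """Run the Splunk search mapped to an intent and return the first result."""
    search = INTENT_SEARCHES[intent_name]
    resp = requests.post(
        SPLUNK_HOST + "/services/search/jobs/export",
        auth=AUTH,
        data={"search": search, "output_mode": "json"},
        verify=False,  # self-signed certificate, as in the add-on's setup
        stream=True,
    )
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            # Alexa would speak a summary built from this JSON result.
            return line.decode("utf-8")

# Example usage:
# print(handle_intent("StepLeaderIntent"))
```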

By: Tyler Lauretti, Senior MIS Major


Talk to Splunk with Amazon Alexa: https://splunkbase.splunk.com/app/3298/

Alexa Talks Strategy

https://www.youtube.com/watch?v=UYaV8ybYV04