Projects


A Wrench for Stanley Black & Decker

One of the most heavily used features of the OPIM Innovate lab is the Fused Deposition Modeling (FDM) 3D printer. We are always interested in innovations in 3D printing and in what students like to use the printers for. We were lucky to have a chance to show a big-name company the upcoming technologies we had in the space.

Stanley Black & Decker’s executive personnel came to UConn for a networking event with the OPIM department. I was tasked with 3D printing a memento to give to the Stanley team. Since the company manufactures tools and hardware, I thought it would be appropriate to print some kind of tool for them to take back with them. After doing some research, I discovered a functional wrench model that we could print as one piece.

Once I printed them, I found that they were not always functional. Using Tinkercad and MakerBot Desktop, I modified the wrench’s features and diameters to get what you see below. Over the course of several weeks I experimented with several wrench sizes and materials. Our main criteria for improving the design were to reduce print time while ensuring the wrenches remained functional. I found that wrenches made of ABS and roughly the size of the blue wrench seen below were the best design. When I decided to reprint, I branded them in order to make them more personal.

In the end the Stanley team was intrigued by what we were doing with the space and the 3D printing technologies I had shown. It was a good professional experience to speak with them and present what I had learned during my academic career.

By: Nathan Hom


Management 4900 Explores the Future of Virtual Reality

It is no exaggeration to say that the invention of the smartphone has completely altered the way we look at business and communication. After 10 years with this device, it is hard to imagine a world without it. However, this is not the first major technology to change our world forever. In the last 100 years we saw the invention of the radio, the television, the personal computer, and the internet. With the rate of adoption of new technology continuously increasing, it is safe to say that the next disruption may be right around the corner. That is why Professor David Noble emphasized the importance of emerging technology in business strategy in his MGMT 4900 class.

MGMT 4900: Strategy, Policy and Planning focuses on exploring the various functional areas of an enterprise to understand how to develop business strategy for future growth. In the Information Age, one of the most important components of an effective business strategy is technology. Professor Noble encouraged us to think about this when analyzing existing companies. He wanted us to think about what technology is in use today and what technology needs to be used in order to stay competitive in the future. Professor Noble believes that one of the most important disruptive technologies will be Virtual Reality. Having worked in the OPIM Innovate lab with various virtual reality devices, I offered to give a demonstration for the class.

Virtual Reality, or VR, is the simulation of a digital environment that the user is able to interact with through sight, sound, and motion. The concept of VR can be traced all the way back to 1939 with the release of the View-Master. In the 1980s, new consumer and industrial technologies began to appear that looked similar to modern VR headsets, but they were either far too expensive or poorly designed. Thanks to advances in computing technology, VR has seen a resurgence in the last decade that has made it more worthwhile for businesses to invest in.

It is one thing to just talk about this technology in a business context, but it is another thing to get hands-on experience. Emerging technology like virtual reality can often be financially unobtainable for the average college student. Having a lab right in the business school dedicated to hands-on exposure to emerging tech is something special that not many schools have. For our class we decided to focus on two virtual reality platforms: Google Cardboard and the HTC Vive. Google Cardboard is an entry-level VR headset powered by a smartphone. The headset itself can be purchased from various retailers or self-built. We focused our discussion around how the New York Times is reinventing how it tells stories through the use of VR. We also explored applications like Google Cardboard Camera that let the user create their own VR experiences. This is the style of VR that most people will be able to access. We then explored the HTC Vive, which offers a more immersive VR experience that includes motion controls. This headset is significantly more expensive and needs a powerful computer to run most applications. However, the quality of the VR experience is much better than the basic Google Cardboard. We focused our discussion around education tools like BodyVR, which takes you on a journey through the human body, and Everest, which lets you experience an expedition from base camp to the summit. The HTC Vive allows for a more interactive and detailed experience and is what we imagine the future state of VR will look like. Both the HTC Vive and Google Cardboard are changing the way we imagine digital content and experiences.

Although many see VR as just a toy for gaming, it is so much more than that. Every year the price of headsets decreases, allowing more developers to create new experiences. Companies from every sector are looking at how virtual reality can be used to improve employee training, operational efficiency, and the customer experience. As more content is created, more users will adopt this technology, ultimately changing the way we consume digital content going forward.

By: Tyler Lauretti

 


Applications of VR in Mind Body Research

Mind Body Health Group:

The UConn Mind Body Health Group will partner with the OPIM lab to use virtual reality and other related technology as potential interventions for various psychological and physical health disorders. A presentation, open to those interested in mind body health research and applications of innovative technology, will be scheduled for this academic year.

Melissa Bray, PhD – Professor of School Psychology

Dissertation Research:

The OPIM lab is partnering with Johanna deLeyer-Tiarks, a School Psychology doctoral student advised by Dr. Melissa Bray, on her dissertation research. This study will investigate the usefulness of Virtual Reality technology as a delivery model for Self-Modeling, a practice rooted in Bandura’s theory of observational learning. It is theorized that VR technology will facilitate an immersive Self-Modeling experience, which may promote stronger gains than traditional Self-Modeling approaches. A research project is being developed to investigate this theory by applying Virtual Reality Self-Modeling to individuals who stutter. The research will culminate in a VR intervention to treat clinical levels of chronic stuttering.

This project is in its early stages of development. IRB materials are not yet approved.

Johanna deLeyer-Tiarks – PhD Student, School Psychology


Designing the School of Business

I have been using CAD to make different models and designs since I was in high school. It’s so satisfying to make different parts in a program that you can then bring to life with 3D printing. In the OPIM Innovation Space, several of the 3D printers have some really special capabilities, and I wanted to put my skills as a designer, and the abilities of the printers, to the test.

The MakerBot Z18 is easily one of the largest consumer-grade printers available. It can print within an 18 by 12 by 12 inch build volume. That’s one and a half cubic feet! I challenged myself to build a model of UConn’s School of Business and then 3D print it at the largest size possible.

I started on Google Maps and traced the outline of the School of Business onto a template. Then I walked around the outside of the school and took pictures of its notable features. It took several days for me to capture the details of the building, such as cutting out windows and creating the overhanging roof, in order to make the model accurate. I even hollowed out the model so that it could accommodate a microcontroller or LEDs, in case we wanted to use some IoT technology to upgrade the project.

Printing this behemoth of a project was a challenge. The entire design printed on its side so that it could use nearly all of the Z18’s build volume, and even at full efficiency it was estimated to take 114 hours to print. I contemplated cutting it into two pieces and printing them separately, but it would be so much cooler to use the full size of the printer. It took several tries before I was able to print the School of Business in one shot. After several broken filament rolls and failed prints, the entire school was finished.

This project gave me great insight into the manufacturing problems faced when using 3D printing technology to produce exceedingly large parts. This model used about 3 pounds of filament and really pushed the limits of the technology. A miniature School of Business was not only a great showcase for the OPIM Department and for OPIM Innovate, but also a testament to the future of the technology. Maybe in the future, buildings will actually be 3D printed. It will be super exciting to see how this technology, and the CAD software that complements it, evolve moving forward.

By: Evan Gentile, Senior, MEM Major

 


Learning Facial Recognition

Nowadays, companies like Apple and Facebook make software that can unlock your phone or tag you in a photo automatically using facial recognition. Facial recognition first attracted me because of its incredible ability to translate a few lines of code into recognition that mimics the human eye. How can this technology be helpful in the world of business? My first thought was cyber security. Although I had only one semester, I wanted to be able to use video to recognize faces. My first attempt used a Raspberry Pi as the central controller. Raspberry Pis are portable, affordable, and familiar because they are used throughout the OPIM Innovation Space. There is also a camera attachment, which I thought made it perfect for my project. After realizing that the program I attempted to develop used too much memory, I moved to using Python, a programming language, on my own laptop.

Installing Python was a challenge in itself, and I was writing code in a language I had never used before. Eventually I was able to get basic scripts to execute on my computer, and then I could install and connect Python to the OpenCV library. OpenCV is an open-source computer vision library that ships with pre-trained face detection data, which lets you quickly run face detection programs in Python. The setup was really difficult; if a single file location was not exactly specified, or had moved by accident, then the entire OpenCV library wouldn’t be accessible from Python. It was these programs that taught me how to take a live video feed and grayscale it so the computer wasn’t overloaded with information. Then I identified the outlines of nearby objects in the video, traced them with a rectangle, and used classifier files developed by Intel to recognize whether those objects were a person’s face. The final product was really amazing. The Python script ran by using OpenCV to reference a large .xml file which held the data on how to identify a face. The program refreshed in real time, and as I looked into my live camera feed I watched the rectangular box follow the contour of my face. I was amazed by how much effort and how much complex code it took to carry out a seemingly simple task. Based on my experience with the project, I doubt that Skynet or any other sentient robots are going to be smart enough to take over the world anytime soon. In the meantime, I can’t wait to see how computer vision technology will make the world a better place to live in.
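For anyone who wants to try the same workflow, the short sketch below shows the pipeline described here: grab a webcam frame, grayscale it, run one of Intel’s pre-trained Haar cascade .xml files, and draw a rectangle around each detected face. It assumes a standard opencv-python install and a default webcam, and it is only an illustration of the approach, not the exact script used for this project.

```python
import cv2

# Load Intel's pre-trained Haar cascade data for frontal faces (an .xml file
# that ships with opencv-python)
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)  # open the default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Grayscale the frame so the detector has less information to process
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Find face-shaped regions and trace each one with a rectangle
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

    cv2.imshow("Face detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```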

By: Evan Gentile, Senior MEM Major

 


Innovate Taking Steps Towards Splunk

After taking OPIM 3222, I learned about this really cool big data platform called Splunk. I learned that the big selling point of Splunk is that it can take any machine data and restructure it so that it is actually useful. So after I got a Fitbit Charge HR, my first thought was, “I wonder what we can find if we Splunk this.” I worked with Professor Ryan O’Connor on this project, and after several hiccups, we finally got a working add-on. Back when we first started this project, we found a “Fitbit Add-On” on Splunkbase that we thought we just had to install before we would be ready to go. After spending a lot of time trying to get this add-on set up, we learned that it was a bit outdated and had some bugs that made it quite difficult to use. After a while this project got pushed off to the side as we worked on other IoT-related projects in the OPIM Gladstein Lab.

By the time another spark of inspiration came along, the Splunk add-on had been removed because of its age and bugs. Since the add-on was gone, we had to take matters into our own hands. Professor O’Connor and I split the work so that I would use the Fitbit API to pull data and he would work on putting it into Splunk. I wrote Python scripts to collect data on steps, sleep, and heart rate levels. We then found that the Fitbit API requires OAuth2 authentication to be renewed every few hours in order to continuously pull data. Professor O’Connor had already tackled a similar issue when making his Nest add-on for Splunk; he used an AWS Lambda function to process the OAuth2 request. We decided to use the same approach for the Fitbit, with the major difference that the function is called every few hours. Professor O’Connor then made a simple interface that lets users get their API tokens and set up the add-on in minutes. We then took a look at all of the data we had and decided the best thing we could make was a step challenge. We invited our friends and family to track their steps with us and created a dashboard to summarize activity levels, show an all-time leaderboard, and visualize steps taken over the course of the day. However, this app only scratches the surface of what can be done with Fitbit data. The possibilities are endless from both a business and a research perspective. We have already gotten a lot of support from the Splunk developer community, and we are excited to see what people can do with this add-on.
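To give a sense of what those collection scripts look like, here is a minimal sketch of pulling a day of step data from the Fitbit Web API and writing it to a file that a Splunk input could monitor. The access token and output file name are placeholders, and the snippet illustrates the general idea rather than the add-on’s actual code.

```python
import json
import requests

# Placeholder OAuth2 access token from a registered Fitbit app; real tokens
# expire every few hours, which is why the add-on refreshes them via Lambda.
ACCESS_TOKEN = "YOUR_FITBIT_OAUTH2_TOKEN"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Fitbit Web API activity time-series endpoint for today's step counts
url = "https://api.fitbit.com/1/user/-/activities/steps/date/today/1d.json"

response = requests.get(url, headers=headers)
response.raise_for_status()

# Write the JSON response to a file that a Splunk file monitor can index
with open("fitbit_steps.json", "w") as f:
    json.dump(response.json(), f)
```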

By: Tyler Lauretti, Senior MIS Major


Introducing Alexa to Splunk

Before the introduction of Apple’s “Siri” in 2011, artificial intelligence voice assistants were no more than science fiction. Fast forward to today, and you will find them everywhere, from your phone, where they help you navigate your contacts and calendar, to your home, where they help you around the house. Each smart assistant has its pros and cons, and everyone has their favorite. Over the last few years I have really enjoyed working with Amazon’s Alexa smart assistant. I began working with Alexa during my summer internship at Travelers in 2016. I attended a “build night” after work where we learned how to start developing with Amazon Web Services and the Alexa platform. Since then, I’ve developed six different skills and received Amazon hoodies, t-shirts, and Echo Dots for my work.


So I mentioned “skills,” but I didn’t really explain them. An Alexa “skill” is like an app on your phone. These skills use natural language processing and the other capabilities of the Alexa platform to do whatever you can think of. Some of the skills I have made in the past include a guide to West Hartford, a UConn fact generator, and a quiz to determine whether you should order pizza or wings. However, while working in the OPIM Innovate lab I have found some other uses for Alexa that go beyond novelties. The first was using Alexa to query our Splunk instance. Lucky for us, the Splunk developer community has already done a lot of the legwork. The “Talk to Splunk with Amazon Alexa” add-on handles most of the networking between Alexa and your Splunk instance. In order for Alexa and Splunk to communicate securely, we had to set up a private key, a self-signed certificate, and a Java keystore. After a few basic configuration steps on the Splunk side, you can start creating your Splunk Alexa skill. This skill will be configured to talk to your Splunk instance, but it is up to you to determine what queries to run. You can create “intents” that take an English phrase and convert it to a Splunk query that you write. However, you can also use this framework to make Alexa do things like play sounds or say anything you want. For example, I used the Splunk skill we created to do an “interview” with Alexa about Bank of America’s business strategy for my management class. Below you can find links to the Alexa add-on for Splunk as well as a video of that “interview.”
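As a hypothetical illustration of the basic idea (not the “Talk to Splunk with Amazon Alexa” add-on’s actual configuration), the sketch below maps spoken intent names to Splunk queries and reads back a result using the Splunk SDK for Python. The intent names, index names, and credentials are all placeholders.

```python
import splunklib.client as client
import splunklib.results as results

# Each Alexa intent name maps to an SPL search that you write yourself
INTENT_QUERIES = {
    "StepLeaderboardIntent":
        "search index=fitbit sourcetype=steps | stats sum(steps) as total by user | sort - total",
    "ErrorCountIntent":
        "search index=_internal log_level=ERROR earliest=-24h | stats count",
}

def answer_for(intent_name):
    # Connect to the Splunk management port (placeholder credentials)
    service = client.connect(host="localhost", port=8089,
                             username="admin", password="changeme")
    # Run the query for this intent as a one-shot search
    job = service.jobs.oneshot(INTENT_QUERIES[intent_name])
    rows = [dict(r) for r in results.ResultsReader(job) if isinstance(r, dict)]
    # Turn the first result row into a sentence for Alexa to speak
    return f"Here is what I found: {rows[0]}" if rows else "I could not find anything."

print(answer_for("ErrorCountIntent"))
```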

By: Tyler Lauretti, Senior MIS Major

 

Talk to Splunk with Amazon Alexa: https://splunkbase.splunk.com/app/3298/

Alexa Talks Strategy

https://www.youtube.com/watch?v=UYaV8ybYV04