Author: Carina Zamudio


AI Recognition

Artificial Intelligence is a fascinating field that has become increasingly integral to our daily lives. Its applications can be seen in facial recognition, recommendation systems, and automated customer service. These tasks make life a lot simpler, but they are quite complex in and of themselves. They rely on machine learning, which is how computers develop ways to recognize patterns. Recognizing those patterns accurately, however, requires large amounts of data. Luckily, in this day and age there is a wealth of data to train from and a wide variety of problems to tackle.

For this project, I wanted to use AI to show how it can integrate with other innovative technologies in the lab. I went with object detection because its engineering applications draw on microcontrollers, data analytics, and the Internet of Things. For example, a self-driving car needs to be able to tell where the road is or whether a person is up ahead, and a task like that simply wouldn't work without each of those components. Looking into the future, we will need ever more innovative ways to identify objects, whether for a car or for security surveillance.

 

When I first started working in the Innovate Lab, I saw the LCD plate in one of the cabinet drawers and wanted to know how it worked. I was fascinated by its potential and decided to focus my project around it. The project itself was to have a Raspberry Pi recognize an object and output the label onto the LCD plate. The first part, recognizing objects, wasn't too hard to build: there are multiple models online that have already been trained for public use, and these pre-trained models give the computer everything it needs to recognize certain objects. The challenge came with installing all of the programs required to use the model and run the script. Afterwards, outputting the text onto the plate simply required wiring the plate to the Pi. The plate was needed to display the results, since otherwise I would have had to plug in a monitor, though other alternatives could have worked as well. For example, the results could have been sent to a web application or stored in a file on the Pi. So far the results haven't always been accurate, but that just leaves room for improvement. I am hoping that soon I will be able to run the detection script on data streamed to the Pi. Overall, I gained a better understanding of how AI fits into engineering applications. This was only one of AI's many capabilities, and there is still so much more to try.
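For readers curious what the detect-and-display loop looks like in practice, here is a minimal sketch in Python. It assumes a publicly available pre-trained MobileNet-SSD model run through OpenCV's DNN module and Adafruit's CharLCD plate library; the model file paths are placeholders, and my actual script differed in its details.

    import cv2
    import Adafruit_CharLCD as LCD  # Adafruit library for the 16x2 LCD plate

    # Placeholder paths to a publicly available pre-trained MobileNet-SSD model
    net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                                   "MobileNetSSD_deploy.caffemodel")
    CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
               "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
               "horse", "motorbike", "person", "pottedplant", "sheep",
               "sofa", "train", "tvmonitor"]

    lcd = LCD.Adafruit_CharLCDPlate()  # the plate wired onto the Pi's header
    camera = cv2.VideoCapture(0)       # Pi camera or USB webcam

    while True:
        ok, frame = camera.read()
        if not ok:
            break
        # The network expects a normalized 300x300 input blob
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                     0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()
        # Take the single most confident detection and show its label
        best = detections[0, 0, detections[0, 0, :, 2].argmax()]
        if best[2] > 0.5:  # confidence threshold
            label = CLASSES[int(best[1])]
            lcd.clear()
            lcd.message("I see a:\n" + label)  # label on the plate's second line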

By: Robert McClardy


An Introduction to Industrial IoT

This past semester, I participated in OPIM 4895: An Introduction to Industrial IoT, a course that brings data analytics and Splunk technology to the University's Spring Valley student farm. In this course, I learned how to deploy sensors and data analytics to monitor real-time conditions in the greenhouse in order to practice sustainable farming and aquaponics. The sensors tracked pH, oxygen levels, water temperature, and air temperature, and the data was then analyzed through Splunk. At the greenhouse, we were able to visualize the results in real time using an augmented reality system with QR codes built on Splunk technology. We were also able to monitor the data remotely through Apple TV dashboards in the OPIM Innovate Lab on campus.
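As a rough illustration of how one of those sensor readings could flow into Splunk, here is a minimal sketch that posts a single measurement to Splunk's HTTP Event Collector. The endpoint URL, token, and field names are placeholders, not the course's actual configuration.

    import json
    import requests

    # Placeholder HTTP Event Collector endpoint and token for a Splunk instance
    HEC_URL = "https://splunk.example.edu:8088/services/collector/event"
    HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

    # One hypothetical greenhouse reading (field names are illustrative)
    reading = {"ph": 6.8, "dissolved_oxygen": 7.2,
               "water_temp_f": 72.4, "air_temp_f": 78.1}

    # Send the reading as a single event; Splunk indexes the JSON fields
    response = requests.post(
        HEC_URL,
        headers={"Authorization": "Splunk " + HEC_TOKEN},
        data=json.dumps({"event": reading, "sourcetype": "greenhouse:sensors"}),
        verify=False,  # lab instances often use self-signed certificates
    )
    response.raise_for_status()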

As a senior who is graduating this upcoming December, I appreciated the opportunity to get hands-on experience working with emerging technology. Learning tangible skills is critical for students who plan on entering the workforce, especially in the world of technology. The Industrial IoT course has been one of my favorite courses of my undergraduate career at UConn. I believe this is largely because it significantly strengthened my technical skills through interactive learning, working closely with other students and faculty, and traveling on-site to the greenhouse. Using Splunk to analyze data produced by sensors we deployed ourselves at the farm is a perfect example of experiential learning.

By: Radhika Kanaskar 


Immigration and Virtual Reality

Virtual Reality is a new set of technologies with the ability to accomplish educational, business, and recreational feats never seen before. If you had told somebody 50 years ago that they could climb and explore Mount Everest from the safety and comfort of their living room, they would not have believed it. As Mark Zuckerberg once said, "Virtual reality was once the dream of science fiction, but the internet was also once a dream, and so were computers and smartphones. The future is coming and we have a chance to build it together." In OPIM Innovate, I recently had the opportunity to explore this past dream of science fiction with a Latino and Latin American Studies class: LLAS 3998 – Human Rights on the US/Mexican Border: Narratives of Immigration.

The lab was coordinated by Anne Gebelein, a UConn professor who wanted to expose her class to narratives in public policy, including the United States border, immigration, and other Latin American topics. Gebelein realized that using a new educational medium like Virtual Reality could give her class a broader and much more meaningful experience of these challenging topics. Working with the Innovate team, I was able to construct a curriculum of learning materials for a specially designed VR workshop: Creating Empathy for Human Rights. We made use of the HTC Vive virtual reality system as well as a set of Google Cardboards to facilitate the event.

One of the experiences showcased was a virtual reality environment created by USA Today known as The Wall. This Vive-only application allowed students to explore various parts of the USA-Mexico border wall as it exists today. The vast size and land area covered by the border certainly surprised many students. In further exploring the topic of human rights, students also investigated a solitary confinement VR experience that placed them in the shoes of an immigration detainee. Several students commented on the gravity and sheer discomfort they felt when they entered this environment.

Overall, VR has certainly begun to prove itself a viable technology for a new style of practical teaching and learning. Over the course of this workshop, the LLAS class and Professor Gebelein were exposed to environments that they never could have entered through traditional teaching methods. I was very pleased that the workshop was a resounding success, and we hope to further develop our materials and assist with more activities like this in the future.

By: Thomas Hannon

 


A Wrench for Stanley Black & Decker

One of the most used features of the OPIM Innovate lab is the Fused Deposition Modeling (FDM) 3D printer. We are always interested in new innovations in 3D printing and in what students like to use the 3D printers for. We were lucky to have a chance to show a big-name company the upcoming technologies we had in the space.

Stanley Black & Decker's executive personnel came to UConn for a networking event with the OPIM department. I was tasked with 3D printing a memento to give to the Stanley team. Since Stanley Black & Decker manufactures tools and hardware, I thought it would be appropriate to print some kind of tool for them to take back with them. Doing some research, I discovered a functional wrench model that we could print as one piece.

Once I printed the first few, I found that they were not always functional. Using Tinkercad and MakerBot Desktop, I modified the wrench's features and diameters to get what you see below. Over the course of several weeks I experimented with several wrench sizes and materials. Our main criteria for improving the design were reducing print time while ensuring the wrenches remained functional. I found that wrenches made of ABS and approximately the size of the blue wrench seen below were the best design. When it came time to reprint, I also branded them to make them more personal.

In the end the Stanley team was intrigued by what we were doing with the space and the 3D printing technologies I had shown. It was a good professional experience to speak with them and present what I had learned during my academic career.

By: Nathan Hom


Management 4900 Explores the Future of Virtual Reality

It is no exaggeration to say that the invention of the smartphone has completely altered the way we look at business and communication. After 10 years with this device, it is hard to imagine a world without it. However, this is not the first major technology that has changed our world forever. In the last 100 years we saw the invention of the radio, the television, the personal computer, and the internet. With the rate of adoption of new technology continuously increasing, it is safe to say that the next disruption may be right around the corner. That is why Professor David Noble emphasized the importance of emerging technology to business strategy in his MGMT 4900 class.

MGMT 4900: Strategy, Policy and Planning focuses on exploring the various functional areas of an enterprise to understand how to develop business strategy for future growth. In the Information Age, one of the most important components of an effective business strategy is technology. Professor Noble encouraged us to think about this when analyzing existing companies: what technology is in use today, and what technology needs to be used in order to stay competitive in the future. Professor Noble believes that one of the most important disruptive technologies will be Virtual Reality. Having worked with various virtual reality devices in the OPIM Innovate lab, I offered to give a demonstration for the class.

Virtual Reality, or VR, is the simulation of a digital environment that the user is able to interact with through sight, sound, and motion. The concept of VR can be traced back all the way to 1939 with the release of the View-Master. In the 1980s, new consumer and industrial technologies began to develop that looked similar to modern VR headsets, but they were either far too expensive or poorly designed. Thanks to advances in computing technology, VR saw a resurgence in the last decade that made it more worthwhile for businesses to invest in.

It is one thing to just talk about this technology in a business context, but it is another thing to get hands-on experience. Emerging technology like virtual reality can often be financially unobtainable for the average college student. Having a lab right in the business school dedicated to hands-on exposure to emerging tech is something special that not many schools have. For our class we decided to focus on two types of virtual reality technology: Google Cardboard and the HTC Vive. The Google Cardboard is an entry-level VR headset powered by a smartphone. The headset itself can be purchased from various retailers or self-built. We focused our discussion on how the New York Times is reinventing how it tells stories through the use of VR. We also explored applications like Google Cardboard Camera that let the user create their own VR experiences. This is the style of VR that most people will have access to. We then explored the HTC Vive, which offers a more immersive VR experience that includes motion controls. This headset is significantly more expensive and needs a powerful computer to run most applications. However, the quality of the VR experience is much better than with the basic Google Cardboard. We focused our discussion on education tools like BodyVR, which takes you on a journey through the human body, and Everest, which lets you experience an expedition from base camp to the summit. The HTC Vive allows for a more interactive and detailed experience and is what we imagine the future state of VR will look like. Both the HTC Vive and Google Cardboard are changing the way we imagine digital content and experiences.

Although many see VR as just a toy for gaming, it is so much more than that. Every year the price of headsets decreases, allowing more developers to create new experiences. Companies from every sector are looking at how virtual reality can be used to improve employee training, operational efficiency, and the customer experience. As more content is created, more users will adopt this technology, ultimately changing the way we consume digital content going forward.

By: Tyler Lauretti

 


Designing the School of Business

I have been using CAD to make different models and designs since I was in high school. It's so satisfying to make parts in a program and then bring them to life with 3D printing. The OPIM Innovation Space has several 3D printers with some really special capabilities, and I wanted to put my skills as a designer, and the abilities of the printers, to the test.

The MakerBot Z18 is easily one of the largest consumer-grade printers available. It can print within an 18 by 12 by 12 inch build volume. That's one and a half cubic feet! I challenged myself to build a model of UConn's School of Business and then 3D print it at the largest size possible.

I started on Google Maps and traced the outline of the School of Business onto a template. Then I walked outside the school and took pictures of its notable features. It took several days for me to capture the details of the building, such as cutting out windows and creating the overhanging roof, in order to make the model accurate. I even hollowed out the model so that it could accommodate a microcontroller or LEDs if we wanted to use some IoT technology to upgrade the project.

Printing this behemoth of a project was a challenge. The entire design printed on its side so that it could use nearly all of the Z18's build volume, and even at full efficiency it was estimated to take 114 hours to print. I contemplated cutting it into two pieces and printing them separately, but it would be so much cooler to use the full size of the printer. It took several tries before I was able to print the School of Business in one shot. After several broken filament rolls and failed prints, the entire school was finished.

This project gave me great insight into the manufacturing problems faced when using 3D printing technology to produce exceedingly large parts. The model used about 3 pounds of filament and really pushed the limits of the technology. A miniature School of Business was not only a great showcase for the OPIM Department and for OPIM Innovate, but also a testament to the future of technology. Maybe in the future, buildings will actually be 3D printed. It will be super exciting to see how this technology, and the CAD software that complements it, evolve moving forward.

By: Evan Gentile, Senior MEM Major

 


Learning Facial Recognition

Nowadays, companies like Apple and Facebook make software that can unlock your phone or tag you in a photo automatically using facial recognition. Facial recognition first attracted me because of its incredible ability to translate a few lines of code into recognition that mimics the human eye. How can this technology be helpful in the world of business? My first thought was cyber security. Although I had only one semester, I wanted to be able to use video to recognize faces. My first attempt used a Raspberry Pi as the central controller. Raspberry Pis are portable, affordable, and familiar because they are used throughout the OPIM Innovation Space. There is also a camera attachment, which I thought made the Pi perfect for my project. After realizing that the program I attempted to develop used too much memory, I moved to using Python, a programming language, on my own laptop.

Installing Python was a challenge in itself, and I was writing code in a language I had never used before. Eventually I was able to get basic scripts to execute on my computer, and then I could install and connect Python to the OpenCV library. OpenCV is an open-source computer vision library that uses stored facial data, in the form of pre-trained classifier files, to quickly run face detection programs in Python. The setup was really difficult; if a single file location was not exactly specified, or had moved by accident, then the entire OpenCV library wouldn't be accessible from Python. It was these programs that taught me how to take a live video feed and greyscale it so the computer wasn't overloaded with information. Then I identified the outlines of nearby objects in the video, traced them with a rectangle, and used classifier files developed by Intel to recognize whether those objects were a person's face. The final product was really amazing. The Python script ran by using OpenCV to reference a large .xml file which held the data on how to identify a face. The program refreshed in real time, and as I looked into my live-feed camera I watched the rectangular box follow the contour of my face. I was amazed by how much effort and how much complex code it took to carry out a seemingly simple task. Based on my experience with the project, I doubt that Skynet or any other sentient robots are going to be smart enough to take over the world anytime soon. In the meantime, I can't wait to see how computer vision technology will make the world a better place to live in.
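For anyone who wants to see the shape of that final script, here is a minimal sketch of the detection loop described above. It assumes the opencv-python package and the pre-trained Haar cascade .xml file that ships with OpenCV; my original script differed in its details.

    import cv2

    # Load the pre-trained Haar cascade .xml that ships with OpenCV
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    camera = cv2.VideoCapture(0)  # live feed from the laptop webcam

    while True:
        ok, frame = camera.read()
        if not ok:
            break
        # Greyscale the frame so the detector has less information to process
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Find face-shaped regions and trace each one with a rectangle
        faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("Live face detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break

    camera.release()
    cv2.destroyAllWindows()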

By: Evan Gentile, Senior MEM Major

 


Innovate Taking Steps Towards Splunk

After taking OPIM 3222, I had learned about this really cool big data platform called Splunk. The big selling point of Splunk is that it can take any machine data and restructure it so that it is actually useful. So after I got a Fitbit Charge HR, my first thought was, "I wonder what we can find if we Splunk this." I worked with Professor Ryan O'Connor on this project, and after several hiccups we finally got a working add-on. Back when we first started the project, we found a "Fitbit Add-On" on Splunkbase that we thought we could simply install and be ready to go. After spending a lot of time trying to get this add-on set up, we learned that it was a bit outdated and had some bugs that made it quite difficult to use. After a while the project got pushed to the side as we worked on other IoT-related projects in the OPIM Gladstein Lab.

By the time another spark of inspiration came along, the old add-on had been removed because of its age and bugs, so we had to take matters into our own hands. Professor O'Connor and I split the work: I would use the Fitbit API to pull data, and he would work on getting it into Splunk. I wrote Python scripts to collect data on steps, sleep, and heart rate levels. We then found that the Fitbit API requires OAuth2 re-authentication every few hours in order to continuously pull data. Professor O'Connor had already tackled a similar issue when making his Nest add-on for Splunk. He used a Lambda function from Amazon Web Services to process the OAuth2 request. We decided to use the same approach for the Fitbit, the major difference being that the function is called every few hours. Professor O'Connor then made a simple interface so users can get their API tokens and set up the add-on in minutes. We then took a look at all of the data we had and decided the best thing we could make was a step challenge. We invited our friends and family to track their steps with us and created a dashboard to summarize activity levels, show an all-time leaderboard, and visualize steps taken over the course of the day. However, this app only scratches the surface of what can be done with Fitbit data; the possibilities are endless from both a business and a research perspective. We have already gotten a lot of support from the Splunk developer community, and we are excited to see what people can do with this add-on.
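To give a sense of the collection side, here is a minimal sketch of the kind of Python script used to pull step data from the Fitbit Web API. The access token is a placeholder, and the real scripts also handled sleep, heart rate, and the token refresh described above.

    import json
    import requests

    # Placeholder OAuth2 access token (the real one is refreshed every few hours)
    ACCESS_TOKEN = "YOUR_FITBIT_ACCESS_TOKEN"

    # Fitbit Web API endpoint for one day of step data ("-" means the current user)
    url = "https://api.fitbit.com/1/user/-/activities/steps/date/today/1d.json"

    response = requests.get(url, headers={"Authorization": "Bearer " + ACCESS_TOKEN})
    response.raise_for_status()

    # Print each reading as one JSON event per line, which Splunk indexes easily
    for point in response.json()["activities-steps"]:
        print(json.dumps({"date": point["dateTime"], "steps": int(point["value"])}))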

By: Tyler Lauretti, Senior MIS Major


Introducing Alexa to Splunk

Before the introduction of Apple's "Siri" in 2010, Artificial Intelligence voice assistants were no more than science fiction. Fast forward to today, and you will find them everywhere, from your phone, where they help you navigate your contacts and calendar, to your home, where they help you around the house. Each smart assistant has its pros and cons, and everyone has their favorite. Over the last few years I have really enjoyed working with Amazon's Alexa smart assistant. I began working with Alexa during my summer internship at Travelers in 2016. I attended a "build night" after work where we learned how to start developing with Amazon Web Services and the Alexa platform. Since then, I've developed six different skills and received Amazon hoodies, t-shirts, and Echo Dots for my work.


So I mentioned "skills," but I didn't really explain them. An Alexa "skill" is like an app on your phone. Skills use Amazon's natural language processing and other capabilities, the same deep learning technology behind the Amazon "Lex" platform, to do whatever you can think of. Some of the skills I have made in the past include a guide to West Hartford, a UConn fact generator, and a quiz to determine whether you should order pizza or wings. However, while working in the OPIM Innovate lab I have found some other uses for Alexa that go beyond novelties. The first was using Alexa to query our Splunk instance. Lucky for us, the Splunk developer community has already done a lot of the legwork. The "Talk to Splunk with Amazon Alexa" add-on handles most of the networking between Alexa and your Splunk instance. In order for Alexa and Splunk to communicate securely, we had to set up a private key, a self-signed certificate, and a Java keystore. After a few basic configuration steps on the Splunk side, you can start creating your Splunk Alexa skill. The skill is configured to talk to your Splunk instance, but it is up to you to determine what queries to run. You can create "intents" that take an English phrase and convert it into a Splunk query that you write. You can also use this framework to make Alexa do things like play sounds or say anything you want. For example, I used the Splunk skill we created to do an "interview" with Alexa about Bank of America's business strategy for my management class. Below you can find links to the Alexa add-on for Splunk as well as a video of that "interview."
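The query an intent ultimately runs is just an ordinary SPL search against your Splunk instance. As a rough illustration of that query side (not the add-on's own mechanism), here is a minimal sketch using Splunk's Python SDK (splunklib); the host, credentials, and search string are placeholders.

    import splunklib.client as client
    import splunklib.results as results

    # Placeholder connection details for a Splunk instance
    service = client.connect(host="localhost", port=8089,
                             username="admin", password="changeme")

    # The kind of search an Alexa intent might wrap, e.g. "how busy were we today?"
    job = service.jobs.oneshot("search index=main earliest=-24h | stats count")

    # Read back the single result that Alexa would then speak aloud
    for row in results.ResultsReader(job):
        if isinstance(row, dict):
            print("There were {} events in the last day.".format(row["count"]))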

By: Tyler Lauretti, Senior MIS Major

 

Talk to Splunk with Amazon Alexa: https://splunkbase.splunk.com/app/3298/

Alexa Talks Strategy: https://www.youtube.com/watch?v=UYaV8ybYV04