Kunjal Panchal
A novel multimodal fusion model that classifies six emotion classes from the visual, verbal, and vocal features of a person's speech in the CMU-MOSEI dataset.
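As a rough illustration of the idea (not the actual model), a minimal early-fusion sketch concatenates the per-modality feature vectors and applies a single classifier over the joint vector. The feature dimensions, weights, and architecture below are placeholders, not CMU-MOSEI's real feature sizes:

```python
import numpy as np

# Hypothetical feature dimensions; CMU-MOSEI's actual visual/acoustic/text
# features differ, and this linear head is only a fusion sketch.
D_VIS, D_AUD, D_TXT, N_CLASSES = 35, 74, 300, 6

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(D_VIS + D_AUD + D_TXT, N_CLASSES))
b = np.zeros(N_CLASSES)

def early_fusion_logits(visual, acoustic, text):
    """Early (feature-level) fusion: concatenate the modality vectors,
    then score all six emotion classes with one linear layer."""
    fused = np.concatenate([visual, acoustic, text])
    return fused @ W + b

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Example: class probabilities for one (placeholder) utterance
probs = softmax(early_fusion_logits(
    np.zeros(D_VIS), np.zeros(D_AUD), np.zeros(D_TXT)))
```

In practice each modality would first pass through its own encoder before fusion; the concatenation step shown here is the simplest fusion baseline.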
I used the ExtraSensory dataset, a human activity recognition dataset containing data from 60 individuals. The task I focus on is probabilistic activity forecasting: given a sub-dataframe of between 1 and 30 consecutive observations for a single individual and a timestamp value t, the objective is to predict the log probability that each of five labels (label:LYING_DOWN, label:SITTING, label:FIX_walking, label:TALKING, label:OR_standing) is active (i.e., takes the value 1) at the future specified time t.
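One simple way to frame this (a sketch, not the project's actual pipeline): fit one binary classifier per label and report its log-probability of the positive class at the forecast time. The features and data below are synthetic placeholders; a real version would summarize the recent observation window and include the horizon t minus the last observed timestamp:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

LABELS = ["label:LYING_DOWN", "label:SITTING", "label:FIX_walking",
          "label:TALKING", "label:OR_standing"]

# Placeholder training data: 200 examples, 6 hypothetical summary features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
Y = (rng.random((200, len(LABELS))) < 0.5).astype(int)

# One independent binary classifier per activity label
models = [LogisticRegression().fit(X, Y[:, j]) for j in range(len(LABELS))]

def forecast_log_probs(x):
    """Log probability that each of the five labels is active at time t."""
    x = np.asarray(x).reshape(1, -1)
    return np.array([m.predict_log_proba(x)[0, 1] for m in models])

log_p = forecast_log_probs(X[0])
```

Treating the labels independently ignores correlations (e.g., SITTING and LYING_DOWN are mutually exclusive in practice), which a stronger multi-label model could exploit.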
In digital cameras, the red, green, and blue sensors are interleaved in the Bayer pattern, so each pixel records only one color channel. The missing values are interpolated to obtain a full-color image. In this project, I implemented several interpolation (demosaicing) algorithms.
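The simplest of these algorithms, bilinear interpolation, can be sketched as a pair of convolutions over the masked color planes. This is a minimal version assuming an RGGB mosaic layout, not necessarily the exact implementation in the project:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    """Bilinear demosaicing of an RGGB Bayer mosaic (H x W) -> (H x W x 3)."""
    h, w = mosaic.shape
    # Masks marking which pixels carry each color in the RGGB pattern
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask

    # Standard bilinear kernels: green averages its 4 cross neighbors,
    # red/blue average their 2 or 4 nearest same-color neighbors
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0

    r = convolve(mosaic * r_mask, k_rb, mode='mirror')
    g = convolve(mosaic * g_mask, k_g,  mode='mirror')
    b = convolve(mosaic * b_mask, k_rb, mode='mirror')
    return np.stack([r, g, b], axis=-1)
```

Bilinear interpolation blurs edges and produces color fringes there, which is why adaptive methods (e.g., gradient-corrected or edge-directed interpolation) are usually compared against it.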
This repository contains the code and information from the Create-A-Thon portion of the MIT FutureMakers program.