Friday

Research Statement

I just put these here to indicate the differences

This was my original research statement

Two major challenges facing modern machine learning and signal processing are scaling up standard techniques to handle excessive amounts of data, and feature extraction and dimensionality reduction. As a second-year graduate student in a computer science department, I am very interested in devising new approaches and algorithms for solving the above problems. In particular, I am interested in devising new general-purpose or application-dependent algorithms, with strong mathematical support, for improving real applications.

One of my main research interests is compressed sensing, a novel data acquisition framework with a strong mathematical foundation and many emerging applications. In my research on compressed sensing, building on my theoretical background and my experience collaborating with others, we introduced an efficient compressed sensing algorithm using expander graphs and a class of deterministic sensing matrices. We also showed that compressed learning, learning in the measurement domain, is possible.

Although compressed sensing is my main research interest, I have also done research on other large-scale learning and feature extraction methods. We devised a new algorithm for scaling up support vector machines using confidence-rated boosting, which addresses the inefficiency of SVM training on massive datasets. I am also a member of the Van Gogh team, which aims to apply signal processing and machine learning to the task of style analysis.

I find AT&T Labs an outstanding place for continuing my research. My talk there in
January convinced me that AT&T offers an excellent combination of theoretical and applied researchers, and hence a unique opportunity for me to collaborate with them and continue my research, on a strong foundation and on real applications.

And this is the final edited version

Personal Statement
I am a second year graduate student in Computer Science at Princeton
University advised by Rob Calderbank and Rob Schapire.

I am interested in the emerging field of compressed sensing which aims to
capture attributes of a sparse signal using very few measurements. As a
computer science graduate student, I am particularly interested in developing low complexity reconstruction algorithms. I am also curious about how to take
advantage of the parsimonious signal representation that compressed sensing
provides. I would like to understand to what extent feature recognition is possible in the measurement domain. In the opposite direction, I would like to understand if there are questions about the signal that cannot be addressed by examining samples and require explicit reconstruction of the signal.

Compressed sensing comes in two flavors. In the work of Candes and Tao and
of Donoho, randomness enters into the definition of the sensing matrix. They
prove that with high probability such a matrix acts like a near isometry on all
sparse signals, and they show that reconstruction is possible using linear
programming. Their approach treats all sparse signals as equally likely, in
contrast to mainstream signal processing, where the filtering is deterministic and the signal is described probabilistically. In the mainstream framework the sensing matrix is deterministic and is required to act as a near-isometry on k-sparse vectors with high probability. I have found weak conditions that are sufficient to show that a deterministic sensing matrix satisfies this Statistical Restricted Isometry Property (STRIP). This framework is very general and includes any sensing matrix whose columns are obtained by exponentiating codewords in a linear code.
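As a rough sketch of the distinction (my notation, not taken from the statement): the uniform RIP of Candes–Tao and Donoho asks the isometry bound to hold for every k-sparse vector, while STRIP asks a fixed deterministic matrix to satisfy the same bound for a random k-sparse vector with high probability:

```latex
% Uniform RIP: for ALL k-sparse x,
(1-\delta)\,\|x\|_2^2 \;\le\; \|\Phi x\|_2^2 \;\le\; (1+\delta)\,\|x\|_2^2 .

% Statistical RIP (STRIP): \Phi deterministic, x a RANDOM k-sparse vector,
\Pr\!\big[(1-\delta)\,\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1+\delta)\,\|x\|_2^2\big] \;\ge\; 1-\epsilon .
```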

I have also worked on random sensing matrices that are the adjacency matrices
of expander graphs. I looked at how these graphs are used to define codes by
Sipser and Spielman and recognized that I could develop a very simple
reconstruction algorithm for k-sparse signals that terminates in at most 2k rounds.
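A minimal sketch of this style of recovery, assuming a simplified majority-vote peeling rule and a hypothetical toy 0/1 matrix standing in for an expander adjacency matrix (this is an illustration, not the exact algorithm referred to above):

```python
import numpy as np
from collections import Counter

def expander_recover(A, y, max_rounds=100):
    """Iteratively peel off coefficients of a sparse x from y = A @ x.

    A : 0/1 adjacency matrix of a bipartite graph (d ones per column),
        standing in for an expander.
    Rule: if a strict majority of the measurements touching column i
    agree on the same nonzero gap g, update x_hat[i] += g.
    """
    m, n = A.shape
    x_hat = np.zeros(n)
    r = y.astype(float).copy()           # residual y - A @ x_hat
    for _ in range(max_rounds):
        if not np.any(r):
            break                        # residual is zero: exact recovery
        updated = False
        for i in range(n):
            rows = np.nonzero(A[:, i])[0]
            gaps = Counter(v for v in r[rows] if v != 0)
            if not gaps:
                continue
            g, count = gaps.most_common(1)[0]
            if count > len(rows) / 2:    # strict majority of i's measurements
                x_hat[i] += g
                r[rows] -= g
                updated = True
                break
        if not updated:
            break                        # no majority found anywhere; give up
    return x_hat
```

On a toy matrix whose columns overlap in at most one row, each round peels off one coefficient; the expander property is what guarantees that such a majority always exists and that the process terminates after O(k) rounds.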

I like to compute. Last fall I worked with Ingrid Daubechies as part of the Van
Gogh Team, developing machine learning algorithms to distinguish authentic Van Gogh paintings from those by painters he admired and tried to imitate. I have also developed a new algorithm for scaling up support vector machines using confidence-rated boosting, which addresses the inefficiency of SVM training on massive datasets.

I am interested in visiting AT&T because I like the combination of theoretical
excellence and real problems.


I sucked :D
