PARTICIPANTS

 

Position: Postdoctoral Fellow

Current Institution: Virginia Tech

Abstract:

Driven by increasing mobile data traffic, cellular networks are undergoing an unprecedented paradigm shift in the way data is delivered to mobile users. A key component of this shift is device-to-device (D2D) communication, in which proximate devices can deliver content on demand to nearby users, thus offloading traffic from often-congested cellular networks. This is facilitated by the spatiotemporal correlation in the content demanded, i.e., repeated requests for the same content from different users at different times. Storing popular files at the “network edge”, such as in small cells, switching centers, or handheld devices, termed caching, offers an excellent way to exploit this correlation in the content requested by the users. Cache-enabled D2D networks are attractive due to the possible linear increase of capacity with the number of devices that can locally cache data.

The performance of cache-enabled D2D networks fundamentally depends upon i) the locations of the devices, and ii) how content is cached on these devices. For instance, consider device-centric placement, where content is placed on a device close to the particular device that needs it. While this is certainly beneficial for the device with respect to which the content is placed, it may be highly sub-optimal if another device in the network wants to access the same content from the device on which it was cached. As a result, we focus on cluster-centric placement, where the goal is to improve the collective performance of all the devices in the network, measured in terms of coverage probability and area spectral efficiency. In this talk, I will present a new comprehensive framework for the performance analysis of cache-enabled D2D networks under different classes of cluster-centric content placement policies, in which the device locations are modeled by a Poisson cluster process. This model accurately captures the fact that devices engaging in D2D communication will typically form small clusters. Finally, the talk concludes with several thoughts on the topology of future D2D networks in which cellular and D2D networks coexist in the same band.
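
The abstract does not fix a particular Poisson cluster process, so purely as an illustrative sketch, the snippet below samples a Thomas cluster process (parent cluster centers drawn from a homogeneous Poisson point process, devices scattered around each center with Gaussian displacements); all intensity and scatter parameters are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

def thomas_cluster_process(lam_parent, mean_per_cluster, sigma, side):
    """Sample a Thomas cluster process on a side x side square.

    lam_parent       -- intensity of the parent (cluster-center) PPP
    mean_per_cluster -- mean number of devices per cluster (Poisson)
    sigma            -- std. dev. of the Gaussian scatter around each center
    """
    n_parents = rng.poisson(lam_parent * side ** 2)
    parents = rng.uniform(0, side, size=(n_parents, 2))
    devices = []
    for p in parents:
        n_children = rng.poisson(mean_per_cluster)
        devices.append(p + sigma * rng.standard_normal((n_children, 2)))
    return parents, np.vstack(devices) if devices else np.empty((0, 2))

parents, devices = thomas_cluster_process(lam_parent=0.02, mean_per_cluster=5,
                                           sigma=2.0, side=100.0)
print(f"{len(parents)} clusters, {len(devices)} devices")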

Bio:
Mehrnaz Afshang received her B.E. degree in Electrical Engineering from Shiraz University of Technology, Iran, in 2011 and her Ph.D. degree from Nanyang Technological University, Singapore, in 2016. During her Ph.D., she was a recipient of the SINGA Fellowship. Since January 2015, she has been a visiting student and later a postdoctoral fellow in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. Her research interests include communications theory, stochastic geometry, device-to-device networks, and wireless ad hoc and heterogeneous cellular networks. She has authored 16 technical papers. She has also served as a reviewer for the IEEE Transactions on Communications, Wireless Communications, Information Theory, Mobile Computing, and Vehicular Technology.

Position: Ph.D. student

Current Institution: New York University

Abstract:
Procedural Game Content Generation from Open Data

Users upload data to the Internet perpetually. From Wikipedia articles to online news, from YouTube videos to Twitter messages, the sheer diversity of available information is immense, reflecting the real world in 1’s and 0’s. Data games make use of freely available online data to automatically generate game content. In such games, players should view, interact with, and learn from the original data during gameplay. They provide novel and playful ways of understanding real-world information that may otherwise be tedious, difficult, or overwhelming due to its complexity, size, or even low entertainment value. For example, a user may play an action game where the map reflects their neighborhood, or a strategy game where attributes are based on a country’s demographic information. Thus, data games have the potential to serve as visualization tools for open data, while also providing new sources of inspiration for content generation.

Transforming open data into game content is not trivial, for data in its raw form is unsuited for direct use in-game. Once data is acquired, the transformation process is divided into data selection and structural transformation. The former involves selecting the parts of the data useful for content generation, while the latter involves adapting the selected data to fit the desired content. My research aims to expand the concept of data games, exploring how different types of data can be transformed into different kinds of playable content with artificial intelligence, and how the original data is perceived by the player during gameplay. I developed a model for procedural content generation for data games, and aim to apply it in the development of various game prototypes. An initial prototype used geographical information for map generation in an open-source version of the classic strategy game Civilization (MPS Labs, 1991), while a prototype in development attempts to use Wikipedia articles to generate themed cards and decks for the game Hearthstone (Blizzard Entertainment, 2014). Currently, I am developing Data Adventures, a murder-mystery adventure game that uses Wikipedia articles to generate the whole game, including its plot, characters, dialogue, items, in-game locations, and images. Although Data Adventures is a work in progress, it is fully playable, providing interaction with non-playable characters based on real people and locations based on real-world places.
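
As a toy illustration of the two-stage pipeline described above (data selection followed by structural transformation), the sketch below maps a hypothetical open-data record about a country onto attributes of a strategy-game faction; the record fields, scaling constants, and faction schema are invented for illustration and are not taken from the prototypes mentioned in the abstract.

# Hypothetical open-data record (e.g., scraped country demographics).
record = {
    "name": "Exampleland",
    "population": 5_200_000,
    "gdp_per_capita": 31_000,
    "coastline_km": 1_250,
    "anthem": "Ode to Example",   # present in the data but unused in-game
}

def select(record):
    """Data selection: keep only the fields useful for content generation."""
    return {k: record[k] for k in ("name", "population", "gdp_per_capita", "coastline_km")}

def transform(selected):
    """Structural transformation: adapt the selected data to the game's schema."""
    return {
        "faction_name": selected["name"],
        # Larger populations yield more starting units (capped for balance).
        "starting_units": min(10, selected["population"] // 1_000_000),
        # Wealth maps to an economy score on a 1-5 scale.
        "economy": min(5, 1 + selected["gdp_per_capita"] // 10_000),
        # Any coastline unlocks naval gameplay.
        "has_navy": selected["coastline_km"] > 0,
    }

print(transform(select(record)))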

Bio:
Gabriella A. B. Barros was awarded a scholarship from Science Without Borders and CAPES to pursue her PhD, initially at the IT University of Copenhagen (Denmark). She is currently a PhD student at the Tandon School of Engineering of New York University (US), advised by Julian Togelius. She holds a B.Sc. in Computer Science from the Federal University of Alagoas (Brazil), and a M.Sc. in Computer Science from the Federal University of Pernambuco (Brazil). Her main research focus is Data Games, which are games with procedurally generated content based upon open data. Additional interests are procedural content generation and artificial intelligence.

Position: Ph.D. candidate

Current Institution: Harvard University

Abstract:
Mutual Influence Potential Networks: Enabling Information Sharing in Loosely-Coupled Extended-Duration Teamwork

The teamwork in such complex collaborative activities as healthcare, co-authoring documents, and developing software is often loosely coupled and extends over time. To remain coordinated and avoid conflicts, team members need to identify dependencies between their activities (which, though loosely coupled, may interact) and share information appropriately. The loose coupling of tasks increases the difficulty of identifying dependencies, with the result that team members often lack important information or are overwhelmed by irrelevant information. My thesis formalizes a new multi-agent systems problem, Information Sharing in Loosely-Coupled Extended-Duration Teamwork (ISLET). It defines a new representation, Mutual Influence Potential Networks (MIP-Nets), and an algorithm, MIP-DOI, that uses this representation to determine the information that is most relevant to each team member. Importantly, the extended duration of the teamwork precludes team members from developing complete plans in advance. Thus, unlike prior work on information sharing in multi-agent systems, the MIP-Nets approach does not rely on a priori knowledge of a team’s possible plans. Instead, it models collaboration patterns and dependencies among people and their activities based on team-member interactions. Empirical evaluations show that this approach is able to learn collaboration patterns and identify relevant information to share with team members.
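
The MIP-Nets representation and the MIP-DOI algorithm are not specified in this abstract; purely as a hypothetical illustration of the underlying idea of estimating influence potential from observed team-member interactions rather than from a priori plans, one could score pairwise coupling with recency-weighted interaction counts and use it to rank whom to inform about a change:

import math
from collections import defaultdict

DECAY = 0.1  # per-day exponential decay; arbitrary illustrative constant

def influence_potential(interactions, now):
    """Estimate pairwise influence potential from timestamped interactions.

    interactions -- list of (member_a, member_b, day) tuples
    Returns a dict mapping unordered pairs to a recency-weighted score.
    """
    scores = defaultdict(float)
    for a, b, day in interactions:
        scores[frozenset((a, b))] += math.exp(-DECAY * (now - day))
    return scores

def rank_recipients(scores, editor, team):
    """Rank the rest of the team by how strongly they are coupled to `editor`."""
    return sorted((m for m in team if m != editor),
                  key=lambda m: scores.get(frozenset((editor, m)), 0.0),
                  reverse=True)

interactions = [("ana", "bo", 1), ("ana", "bo", 8), ("ana", "cy", 2), ("bo", "cy", 9)]
scores = influence_potential(interactions, now=10)
print(rank_recipients(scores, editor="ana", team=["ana", "bo", "cy"]))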

Bio:
Ofra Amir is a PhD candidate in the John A. Paulson School of Engineering and Applied Sciences at Harvard University; her adviser is Prof. Barbara Grosz. She holds a BSc and an MSc in Information Systems Engineering, both from Ben-Gurion University. Ofra’s research combines artificial intelligence algorithms with human-computer interaction methods, focusing on the development of intelligent algorithms and systems for supporting human teamwork. Her work received an honorable mention for best paper award at the SIGCHI Conference on Human Factors in Computing Systems (CHI’15) and will appear in IJCAI-2015. Ofra was a co-author of the paper that won second place in the challenges and visions track at the 2013 International Conference on Autonomous Agents and Multiagent Systems. Her project “GoalKeeper: Supporting Goal-Centered Coordinated Care” was chosen as a finalist in the 2014 CIMIT Primary Care Prize student competition, and she was a finalist for the Microsoft Research PhD Fellowship. Ofra has also received several awards for her teaching at Harvard University and Ben-Gurion University. She co-chaired the 2015 AAAI Spring Symposium on Intelligent Systems for Supporting Distributed Human Teamwork.

Position: Ph.D. student

Current Institution: Arizona State University

Abstract:
Wrinkle Cellomics: Screening for Cancer Cells using an Ultra-Thin Silicone Membrane

Bladder cancer is the fifth most common cancer in the United States and has the highest recurrence rate of any cancer, necessitating lifelong patient surveillance as often as every 3 months following initial treatment. Currently, two surveillance techniques are employed: urine cytology, the microscopic examination of naturally exfoliated cells expelled in urine, and cystoscopy, in which a small probe with a camera is inserted into the bladder through the urethra for visual inspection of the bladder lining. Each technique has shortcomings for different types or grades of bladder cancer. Therefore, I developed a novel detection platform that capitalizes on the inherent physical differences that distinguish cancerous from healthy cells. This platform is as noninvasive as urine cytology, yet is highly sensitive and selective for cancer. Numerous studies have revealed the inherent physical differences between cancerous cells and their healthy counterparts; cancerous cells have consistently demonstrated greater flexibility, stretchability, and malleability. Several distinct methods have been employed to observe and discern these unique physical differences; unfortunately, these methodologies collectively suffer from reliance on expensive, complex equipment and highly specialized personnel, offer very low throughput of single cells, and are infeasible for patient diagnostic screening. To overcome these obstacles, I have developed a highly parallel, high-throughput platform to simultaneously analyze all cells in a patient sample for the presence of cancer. The increased cellular malleability and traction forces of cancerous cells selectively deform this detection platform for rapid cancer diagnosis. The detection platform consists of an ultra-thin silicone membrane that is approximately 30 nm thick and floats upon liquid silicone. Cancerous and healthy cells adhere and spread upon this silicone membrane platform; however, cancerous cells exclusively exert sufficient force to deform the membrane. This cancer-specific membrane deformation is easily visualized as distinct membrane wrinkle patterns. Thus, this detection platform translates the inherent physical differences between cancerous and healthy cells into visual differences that are easily observed, even when cancerous cells are within a mixed cell population. I have successfully employed this detection platform to preliminarily diagnose bladder cancer from human patient urine samples.

Bio:
Jennie is a PhD student in Electrical Engineering at Arizona State University and received her B.E. from Auburn University. She is an NSF Graduate Research Fellow, an Ira A. Fulton Dean’s Fellow, and an ARCS Scholar. Her research interests include the application of MEMS technology in a biological context, specifically for the diagnosis and treatment of human diseases, and her work has been presented at the 27th and 29th IEEE International Conferences on MEMS. Jennie’s dissertation work is focused on the development of a diagnosis platform for the early detection of bladder cancer and on novel therapeutic microscale implants to treat hydrocephalic fluid retention in the skull. Her long-term research goals are centered on the betterment of human health and wellness through the mindful use of technology. Additionally, Jennie is active in her community, mentoring with Big Brothers Big Sisters and volunteering at the local Science Center.

 

Position: Postdoc

Current Institution: MIT

Abstract:
Computer-aided classification of suspicious pigmented lesions using wide field of view images

Cutaneous melanoma is responsible for over 75% of skin cancer deaths. In 2016, an estimated 76,380 patients will be diagnosed with melanoma, and an estimated 10,150 patients will die of melanoma in the U.S. However, the prognosis is excellent for localized disease and primary tumors, with a 5-year survival rate of more than 90%. For late-stage (stage IV) tumors, the survival rate drops to 16.1%, with a 20-fold increase in treatment costs. Hence, early detection is key to reducing melanoma mortality and lowering treatment costs. Currently, early detection of malignant lesions via thorough skin screening in a wide patient population is limited by dermatologist patient throughput. Primary care physicians (PCPs), on the other hand, see a large percentage of the general population in their daily practice. Our aim is to empower PCPs with a quick and easy-to-use screening tool that widens access to skin analysis for a broader population while concurrently limiting the unnecessarily high referral rate from primary care physicians to specialists. Our technical approach is based on a computer-aided classification system that uses a powerful machine learning algorithm to analyze wide field of view images of the patient’s body and automatically distinguish suspicious from non-suspicious skin lesions.

Bio:
Judith Birkenfeld is a physicist from Germany with extensive experience in solving biomedical research problems. Judith received her M.Sc. degree in medical physics from the University of Heidelberg, Germany. For her research thesis she collaborated with the MGH Francis H. Burr Proton Beam Therapy Center in Boston, where she simulated radiation procedures to analyze the effects of different dose rates on cancer patients and their treatment. After graduating from Heidelberg with a diploma in physics, she transitioned to the field of biomedical optics for her PhD thesis at the Institute of Optics (VioBio Lab) in Madrid, Spain. Judith investigated the gradient refractive index of the crystalline lens and its influence on optical aberrations with age and accommodation. Her work has significant implications for the quantitative study of in vivo lenses and provides important insights into the mechanism of existing IOLs and their possible future development. This research earned Judith her PhD in physics with honors from the Complutense University. In 2014 she was accepted as a catalyst fellow in the M+Visión program at the Massachusetts Institute of Technology, a highly competitive fellowship program designed to prepare scientists with advanced technical degrees for biomedical technology innovation leadership, tackle current unmet medical needs in the healthcare system, and develop patient-centric biomedical technologies. During her first year as a catalyst fellow, Judith co-created the Skin project. The goal of the project is to empower primary care physicians, who already have access to a large percentage of the patient population, with a tool that provides effective skin screening for everybody and helps diagnose melanoma while it is still highly treatable. The team has developed a computer-aided classification system that works in combination with wide field of view images, providing a rapid and objective referral tool for primary care physicians. Currently, Judith is an M+Visión Cofund/Marie Curie Action Fellow at MIT and Brigham and Women’s Hospital/HMS. In addition to being a key founding member of Team Skin’s research in cancer care, she is also working on a project that seeks to monitor hydration in the elderly care setting, as a major contributor to study design and experimental measurements. In the future, she hopes to continue to work within a strong collaborative network of scientists, clinicians, and businesses in an environment that pushes bench-to-bedside innovation.


Position: Ph.D. Candidate

Current Institution: Princeton University

Abstract:
Sifting Through Massive Text Corpora to Detect and Characterize Historical Events

Significant events are characterized by interactions between entities (e.g., countries, organizations, individuals) that deviate from typical interaction patterns. Investigators, such as historians, commonly read large quantities of text to construct an accurate picture of the who, what, when, and where of an event. For example, the US National Archives has collected about two million diplomatic messages sent between 1973 and 1978; historians are interested in exploring the content of these messages. Unfortunately, this corpus is cluttered with diplomatic “business as usual” communications such as arrangements for visiting officials, recovery of lost or stolen passports, and obtaining lists of attendees for international meetings and conferences. But hidden in the corpus are indications of important diplomatic events, such as the fall of Saigon. These events, and the documents that portray them, are of primary interest to historians. My goal is to develop and apply a scalable method to help historians and political scientists sift through such document collections to find potentially important events and the primary sources describing them.

The principal goal of my research is to develop probabilistic models for understanding influences on human behavior; in this work, I develop a model for detecting and characterizing influential events in large collections of communication. Specifically, I have developed a structured topic model to distinguish between topics that describe “business-as-usual” communication and time-localized topics that deviate from these patterns. This approach successfully captures critical events and identifies documents of interest when applied to the US State Department diplomatic messages from the 1970s. The model identifies important time intervals and relevant documents for real-world events such as the Indonesian invasion of East Timor at the end of 1975, the evacuation of Saigon and South Vietnam prior to the end of the Vietnam War, the Sinai Interim Agreement, the Apollo 17 lunar gifts to all nations, Operation Entebbe, and the death of Mao Tse-tung, among others. I have released source code for this method, which includes an implementation of the machine learning algorithm that infers model parameters, along with tools to visualize and explore the model results. My longer-term research goals include developing machine learning approaches to study how human behavior and decisions are influenced by events and interactions in many domains.
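
The structured topic model itself is not reproduced here; as a rough sketch of the underlying intuition (time-localized topics deviate sharply from a “business-as-usual” baseline), the snippet below flags topics whose weekly proportion spikes far above their own historical level. The synthetic proportions and the threshold are illustrative only.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic weekly topic proportions: rows = topics, columns = weeks.
topics, weeks = 5, 52
props = np.full((topics, weeks), 1.0 / topics) + 0.02 * rng.standard_normal((topics, weeks))
props[3, 40:43] += 0.4   # inject a time-localized "event" into topic 3

def bursty_topics(props, train_weeks=30, z_threshold=5.0):
    """Flag (topic, week) pairs whose share deviates sharply from the topic's own baseline."""
    flagged = []
    for k, series in enumerate(props):
        mu, sd = series[:train_weeks].mean(), series[:train_weeks].std() + 1e-9
        z = (series[train_weeks:] - mu) / sd
        if z.max() > z_threshold:
            flagged.append((k, train_weeks + int(z.argmax())))
    return flagged

print(bursty_topics(props))   # expected to flag only topic 3, inside the injected burst window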

Bio:
Allison Chaney is a PhD candidate in the Computer Science department at Princeton University and is advised by Professor David Blei. Her primary research interest is to develop statistical machine learning methods for real-world human-centered applications; specifically, she develops Bayesian latent variable models to estimate human behavior and identify external factors that influence it. In addition to deriving and implementing scalable inference algorithms for these models, she builds visualization tools to assist domain experts in interpreting and exploring the model results.

Allison received a BA in Computer Science and a BS in Engineering from Swarthmore College in 2008, and has worked for Pixar Animation Studios and the Yorba Foundation for open-source software. She has also completed research internships with eBay/Hunch and Microsoft Research. This fall, she will begin postdoctoral research with Professors Barbara Engelhardt and Brandon Stewart to study how machine learning algorithms influence human behavior in the context of recommendation systems and how to account for these biases when training future algorithms. In 2014, Allison served as the Program Chair for the Women in Machine Learning (WiML) Workshop, and is now a member of the WiML board; she also engages in various academic mentoring efforts.

Position: Ph.D. student

Current Institution: Stanford University

Abstract:
Towards the Machine Comprehension of Text

Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved, goal of NLP. A key factor impeding its solution by machine-learned systems is the limited availability of human-annotated data. Recently, researchers proposed to exploit the fact that the abundant news articles of CNN and the Daily Mail are accompanied by bullet-point summaries in order to heuristically create large-scale supervised training data for the reading comprehension task. My research aims to address the following two questions: 1) whether simple recurrent neural networks with an attention mechanism are highly effective at solving such large-scale but synthetic tasks; and 2) whether the models trained on such datasets can be valuable for making real progress on machine comprehension.
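
As a minimal sketch of the kind of recurrent-plus-attention reader referred to above (written in PyTorch; the bilinear attention form, layer sizes, and all names here are illustrative assumptions rather than the exact model studied), consider:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveReader(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.passage_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.question_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.bilinear = nn.Linear(2 * hidden_dim, 2 * hidden_dim, bias=False)

    def forward(self, passage, question):
        # passage: (batch, p_len), question: (batch, q_len) of token ids
        p, _ = self.passage_rnn(self.embed(passage))        # (batch, p_len, 2h)
        _, q_h = self.question_rnn(self.embed(question))    # (2, batch, h)
        q = torch.cat([q_h[0], q_h[1]], dim=-1)             # (batch, 2h) question summary
        scores = torch.bmm(p, self.bilinear(q).unsqueeze(2)).squeeze(2)  # (batch, p_len)
        alpha = F.softmax(scores, dim=1)                    # attention over passage tokens
        return torch.bmm(alpha.unsqueeze(1), p).squeeze(1)  # (batch, 2h) attended passage summary

reader = AttentiveReader(vocab_size=5000)
passage = torch.randint(0, 5000, (2, 30))    # batch of 2 passages, 30 tokens each
question = torch.randint(0, 5000, (2, 8))
print(reader(passage, question).shape)       # torch.Size([2, 256])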

Bio:

Danqi Chen is currently a Ph.D. candidate in the Computer Science Department at Stanford University, advised by Prof. Christopher Manning. Her main research interests lie in deep learning for natural language processing and understanding, and she is particularly interested in the intersection between text understanding and knowledge reasoning. She has been working on machine comprehension, knowledge base completion/population, and dependency parsing, and her work has been published in leading NLP/ML conferences. Prior to Stanford, she received her B.S. from Tsinghua University in 2012. She has been awarded the Microsoft Research Women’s Fellowship and has also received several awards in programming contests (IOI’08 gold medalist and ACM/ICPC World Finals 2010 silver medalist).

Position: Research Scientist at Google; Research Affiliate at MIT, CSAIL

Current Institution: Google Inc.; MIT Computer Science and Artificial Intelligence Laboratory (CSAIL)

Abstract:
Exploring and Modifying Spatial Variations in a Single Image

Structures and objects captured in image data are often idealized by the viewer. For example, buildings may seem to be perfectly straight, or repeating structures such as the kernels of an ear of corn may seem almost identical. However, in reality, such flawless behavior hardly exists. The goal of this line of work is to detect spatial imperfection, i.e., the departure of objects from their idealized models, given only a single image as input, and to render a new image in which the deviations from the model are either reduced or magnified. Reducing the imperfections allows us to idealize/beautify images, and can be used as a graphics tool for creating more visually pleasing images. Alternatively, increasing the spatial irregularities allows us to reveal useful and surprising information that is hard to perceive with the naked eye (such as the sagging of a house’s roof). I will consider this problem under two distinct definitions of the idealized model: (i) ideal parametric geometries (e.g., line segments, circles), which can be automatically detected in the input image, and (ii) perfect repetitions of structures, which rely on the redundancy of patches in a single image. Each of these models has led to a new algorithm with a wide range of applications in civil engineering, astronomy, design, and materials defect inspection.
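
For the first class of idealized models (parametric geometries such as line segments), the core operation can be sketched as: fit the parametric model to observed points, then rescale each point’s residual to either reduce (idealize) or magnify its deviation. The NumPy snippet below does this for a noisy, slightly bulging line; it is a schematic of the idea, not the actual algorithm.

import numpy as np

rng = np.random.default_rng(2)

# Noisy samples of a roughly straight edge (e.g., points along a building's wall),
# with a localized bulge added around x = 6.
x = np.linspace(0.0, 10.0, 50)
y = 0.3 * x + 1.0 + 0.15 * rng.standard_normal(x.size) + 0.6 * np.exp(-(x - 6) ** 2)

def edit_deviations(x, y, alpha):
    """Fit an ideal line to the points and rescale their residuals.

    alpha = 0  -> fully idealized (points snapped onto the fitted line)
    alpha = 1  -> original points
    alpha > 1  -> deviations magnified, making subtle irregularities visible
    """
    slope, intercept = np.polyfit(x, y, deg=1)     # least-squares ideal line
    ideal = slope * x + intercept
    return ideal + alpha * (y - ideal)

idealized = edit_deviations(x, y, alpha=0.0)
magnified = edit_deviations(x, y, alpha=3.0)
print("max deviation, original vs. magnified:",
      np.abs(y - idealized).max(), np.abs(magnified - idealized).max())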

Bio:
Tali Dekel has recently joined Google as a Research Scientist, working on developing computer vision and computer graphics algorithms. Before Google, she was a Postdoctoral Associate at the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT, working with Prof. William T. Freeman. Tali completed her Ph.D. studies at the School of Electrical Engineering, Tel-Aviv University, under the supervision of Prof. Shai Avidan and Prof. Yael Moses. Tali’s Ph.D. focused on the use of multi-camera systems to solve classic and innovative tasks in computer vision and computer graphics, including 3D structure and 3D motion estimation, content-geometry-aware stereo retargeting, and photo sequencing (recovering the temporal order of a distributed image set). In her postdoc studies, she has been working on developing new algorithms that detect and visualize imperfections/irregularities in a single image. Her research interests include computer vision and graphics, geometry, 3D reconstruction, motion analysis, and image visualization.

Position: Postdoctoral Fellow

Current Institution: University of Toronto

Abstract:

Software Product Lines allow developers to take advantage of similar products or systems that share a common core and differ in a set of features (units of functionality). Therefore, a product can be defined by the core and the set of features it contains.

Two main directions for modeling and analyzing software product lines are the annotative and the compositional approach. In the annotative approach, all possible features are represented in a single succinct model (called a 150% representation), and a particular product is obtained by removing any element of the model belonging to a feature that is not part of that product. In the compositional approach, the product is obtained by composing the core and the desired features.

The advantage of annotative approaches is that they allow analyzing all products efficiently, while compositional approaches are better for analyzing feature interactions. An example of an undesired feature interaction in a telephony system is having both Call Forwarding and Call Waiting: when the line is busy and a call arrives, it is not clear how the system should behave.

In addition, several works combine the approaches so that features are defined modularly and then composed to build a 150% representation of the system. This allows reasoning both about the features modularly and about every possible product, by taking advantage of all their shared behavior. However, a single composition operator is given, and any additional constraints on where or how a feature should be composed in order to avoid interactions are tangled with the behavioral definition of the feature. This prevents feature reuse across different systems. In our work we treat different composition choices as first-class entities, which allows more precise reasoning. In this talk, we describe several composition operators and show how to use them to detect feature interactions.
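
As a toy illustration of the kind of interaction the Call Forwarding / Call Waiting example describes (this is not one of the composition operators presented in the talk), features can be modeled as reaction rules and a naive composition checked for conflicting reactions to the same event in the same state:

# Each feature maps (state, event) -> action. The core system plus features
# are composed by union; a conflict arises when two features prescribe
# different actions for the same (state, event) pair.
call_forwarding = {("busy", "incoming_call"): "forward_to_voicemail"}
call_waiting    = {("busy", "incoming_call"): "play_waiting_tone"}

def compose(*features):
    """Naively compose features and report interactions (conflicting rules)."""
    composed, conflicts = {}, []
    for feature in features:
        for key, action in feature.items():
            if key in composed and composed[key] != action:
                conflicts.append((key, composed[key], action))
            else:
                composed[key] = action
    return composed, conflicts

composed, conflicts = compose(call_forwarding, call_waiting)
for (state, event), a1, a2 in conflicts:
    print(f"interaction: in state '{state}' on '{event}', '{a1}' vs. '{a2}'")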

Bio:
I am a postdoctoral fellow at the Department of Computer Science at the University of Toronto, working with Prof. Marsha Chechik. My main research interests are in the use of compositional techniques for software analysis, including modular specifications, verification, interaction detection, etc.

I completed my Ph.D. in Computer Science under the supervision of Prof. Shmuel Katz at the Department of Computer Science, Technion, Israel. My dissertation was on “Compositional Verification of Events and Responses”.

I studied at the Department of Computer Science, Universidad de Buenos Aires, Argentina, where I obtained a Licentiate degree in Computer Science. My thesis, titled “The equivalence between FO(IFP) and the class B”, was supervised by Rafael Grimson and Guillermo Martínez.


Position: Ph.D. Candidate

Current Institution: Stanford University

Abstract:
Integration of Thin Film Magnetoelectric Composites for Voltage-Tunable Devices

The operation of today’s electronics is controlled by voltage and electric fields, not magnetic fields. As a result, identifying methods for electrical control of magnetic devices has been a vibrant research topic in recent years. Magnetoelectric composites, which combine piezoelectric and magnetostrictive materials, offer a unique and intriguing solution. In these composites, a voltage applied to the piezoelectric film causes both it and the adjacent magnetic film to strain. The magnetization of the magnetic material then responds to the strain and changes the operating state of the device. Bulk magnetoelectric composites have demonstrated large tunability of magnetic properties, magnetization rotation, or uniaxial magnetization switching under applied voltages. However, in order for these capabilities to be incorporated into electronic systems, magnetoelectric composites must be made in thin-film form and integrated with other silicon devices.

In this research work, we demonstrate integrated thin-film magnetoelectric composite resonant waveguide devices. The various materials and design considerations for producing these magnetoelectric devices will be discussed. Numerous materials were considered; in the end, the P96N4Z20T80 piezoelectric and Co43Fe43B14 magnetostrictive materials were selected for their high levels of strain control and linearity. In addition, the composite structure and electrode design trade-offs were simulated and optimized to produce maximal magnetoelectric coupling given the constraints of material thickness and the limitation of substrate clamping. Top and bottom interdigitated electrodes of the same voltage polarity were used to produce the most uniform in-plane, uniaxial tensile strain in the composite. Finally, tunability results for the resonant waveguides will be presented, illustrating the voltage control of the magnetization and relative permeability of the material in its thin-film form.

Bio:
Amal El-Ghazaly recently completed her Ph.D. in Electrical Engineering at Stanford University under the direction of Professor Shan X. Wang and is currently pursuing postdoctoral research at the University of California, Berkeley with Professor Jeffrey Bokor. Her postdoctoral work delves into understanding the mechanisms for ultrafast switching of magnetic dots, both optically and electronically, and integrating these dots into a digital logic system. During her Ph.D., she was awarded the NSF Graduate Research Fellowship, the NDSEG Fellowship, and the Stanford DARE Fellowship. Her doctoral work focused on the design and optimization of magnetic and magnetoelectric material composites for radio-frequency devices. In the first part of her Ph.D., she demonstrated GHz-frequency-range magnetic inductors and continued on to develop the first fully integrated tunable RF waveguide resonator using thin-film magnetoelectric composites. Dr. El-Ghazaly holds her Master’s and Bachelor’s degrees from Carnegie Mellon University.

Position: Postdoctoral Research Associate

Current Institution: Princeton University

Abstract:
Studying the Great Firewall of China: From Internet Filtering to Actively Probing Anti-Censorship Tools

Almost 20 years ago, the Chinese government initiated legislation to regulate the Internet in Mainland China, resulting in the birth of a national firewall known as the Great Firewall of China (GFW). In the past couple of years, the operational development of the GFW has significantly escalated state-level information control. To enforce censorship policies, the Chinese government has augmented the GFW with sophisticated techniques that not only discover and block, but also actively attack anti-censorship tools. In this presentation, I will give a detailed overview of my research on this topic. First, I present how the Chinese government used its “Great Cannon” to attack CloudFlare and GitHub because they hosted mirrors of Greatfire.org (a website that distributes information about GFW censorship and circumvention). Then, I will describe how my co-authors and I used new side channel techniques to investigate the GFW over time and space. These side-channel techniques scale well, allowing us to answer questions that were previously out of reach. At the same time, the techniques are designed with ethical considerations in mind: they do not require active participation by clients behind the GFW and thus avoid exposing activists to legal reprisals by the Chinese government. Finally, I will paint a detailed picture of the GFW’s active probing system, which is deployed to detect and block hidden circumvention tools. I will show that the system makes use of a large number of IP addresses, provide evidence that all these IP addresses are centrally controlled, and determine the location of the Great Firewall’s sensors.

Bio:
Roya Ensafi is a Postdoctoral Research Associate in the Computer Science Department and a research fellow at the Center for Information Technology Policy (CITP) at Princeton University. Her research focuses on computer networking and security, with an emphasis on network measurement. The primary goal of her current research is to better understand and bring transparency to network interference (e.g., censorship) by designing new tools and techniques. In her dissertation, which passed with distinction, Roya developed side channels to remotely measure TCP connectivity between two hosts, without requiring access to either host. Most of her latest research projects center around studying national firewalls, especially the Great Firewall of China (GFW). Her work studying how the Great Firewall of China discovers hidden circumvention servers received an IRTF Applied Networking Research Prize (ANRP) in 2016. Her work has been published in the USENIX Security Symposium, the ACM Internet Measurement Conference, and the Symposium on Privacy Enhancing Technologies. While a Ph.D. student at the University of New Mexico, she received the Sigma Xi Research Excellence Award and the UNM Best Graduate Student Mentor Award.

Position: Ph.D. Candidate

Current Institution: University of Illinois at Urbana-Champaign

Abstract:

Today, algorithms exert great power in the curation of everyday online content by prioritizing, classifying, associating, and filtering information. This power can shape users’ experience and even the evolution of the system as a whole. For instance, believing that YouTube’s recommendation algorithm gave significant weight to a video response made to another video, a group of girls known as “Reply Girls” uploaded irrelevant video responses to already popular videos to move their responses to the top of the suggested-videos list. In an attempt to increase their view counts, the “Reply Girls” also added sexually suggestive thumbnails to their posts and earned upwards of tens of thousands of dollars in ad-sharing revenue. In another example, some Facebook users complained about, or even tended to block, new mothers in their News Feed, asserting that new mothers exclusively posted photos of their babies. However, it was found that Facebook’s News Feed curation algorithm created this misperception because it prioritizes posts that receive likes and comments, and photos of babies often received attention from a large audience.

While such powerful algorithms are omnipresent online, they are rarely highlighted in the interface, leaving users unaware of their presence. Even in cases where users are aware of these algorithms’ presence, the black boxes that house them usually prevent users from understanding the details of their functionality. While this opaqueness often exists to protect intellectual property, it also stems in part from the merits of “seamless” design, where designers hide details from users to make interactions “effortless”. However, some now argue that adding visibility into system boundaries through the revelation of “seams” helps people become more adaptive, innovative and intelligent users. For example, a site such as Kayak.com practices seamful design through its reliance on pop-up windows to establish credibility for competitive price quotes. However, such approaches have guided very few efforts in the domain of algorithms.

Detecting bias or even discrimination that an algorithm can cause is another reason for revealing algorithms’ existence or functionality in online platforms. Discriminatory ads based on gender or race, different prices shown for the same products or services to different users, and Flickr’s automatic image-tagging algorithm mistakenly labeling a black man as an ape are some examples of the bias and discrimination that can result from the black-boxing of algorithms. The increasing prevalence of online curation algorithms, coupled with their aforementioned substantial influence, raises many questions:

RQ1-Algorithm Awareness: How knowledgeable are users about these algorithms, and how aware should they be of their existence and functionality?

RQ2-Awareness Effects: If we can provide insight to users about an algorithm’s existence or functionality, how will this insight affect their interaction experience?

RQ3-Algorithm Bias: How can we detect whether algorithms have biases that affect users’ experiences?

In this presentation, I present innovative approaches that we have explored and plan to explore in the future to answer the above questions.

Bio:
Motahhare Eslami is a 5th-year Ph.D. candidate in the Computer Science Department at the University of Illinois at Urbana-Champaign. Her research interests are in social computing, human-computer interaction, and data mining. She is interested in analyzing and understanding people’s behavior in online social networks. Her recent work has focused on the effects of feed personalization in social media and how awareness of a filtering algorithm’s existence affects users’ perception and behavior. Her work has been published at prestigious conferences and has also appeared internationally in the press, including the Washington Post, TIME, MIT Technology Review, New Scientist, the BBC, CBC Radio, O Globo (a prominent Brazilian newspaper), Fortune, and numerous blogs. Her research received an honorable mention award at the Facebook Midwest Regional Hackathon 2013 and the best paper award at CHI 2015. Motahhare was a Google PhD Fellowship nominee (2015 and 2016) and a Facebook PhD Fellowship finalist (2016).

Position: Ph.D. Candidate and Research Assistant

Current Institution: University of Illinois at Urbana-Champaign

Abstract:
Verifying and Debugging Smart Cyber-physical Systems

The 21st century has witnessed phenomenal growth in cyber-physical systems (CPS), which tightly couple physical processes with software, networks, and sensing. Unmanned aerial vehicles are starting to share increasingly crowded airspace with commercial and passenger air traffic, autonomous satellites will soon coordinate with one another and service aging satellites, networked medical devices are being implanted, ingested, and injected in humans, and tomorrow’s cars may drive themselves. Reliability and security lapses of such cyber-physical systems routinely disrupt communities, and on many occasions have led to catastrophic failures, with major damage to infrastructure and harm to people. Fortunately, recent advances in verification tools for nonlinear hybrid systems have brought them to the threshold of solving real-world embedded system design and analysis problems. Early successful applications have been demonstrated in automotive powertrain control systems, medical devices, and power plants.

A common way to model cyber-physical systems is as hybrid systems or networks of hybrid systems. Current testing and debugging approaches for CPS in industry are best-effort at best. A test suite covers only a small fraction of behaviors. Testing is attractive because its computational burden is low and it requires only executable models, which are often available from the design, but it suffers from incompleteness: many behaviors are left untested and can later surface as serious bugs. Formal verification techniques, on the other hand, provide a mathematical proof that the system behaves safely or maintains an invariant for all (possibly infinitely many) initial configurations or parameters of the system.

Despite the recent success of existing approaches in analyzing hybrid and embedded systems, several challenges remain to their widespread applicability: they are either applicable to only restricted classes of systems, overly conservative, or computationally expensive. The method I will present overcomes these challenges and computes system-level guarantees from traces for large nonlinear hybrid systems. It enables rapid safety or invariance verification and has been demonstrated to scale to industrial designs. The key insight that enables the method to overcome these technical challenges is that it combines the speed of numerical simulations with automatic analysis of design models. I will also introduce the core algorithms of our method, which automatically compute the reachable states of the system using simulations as guidance, while preserving soundness and relative completeness guarantees.
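
A very coarse sketch of the simulate-and-bloat idea follows; the actual method relies on much tighter, automatically computed discrepancy functions and on refinement, whereas the dynamics, the Lipschitz-style bound, and the unsafe set here are purely illustrative.

import numpy as np

# A stable 2-D linear system dx/dt = A x, an initial ball, and an unsafe half-plane x1 >= 2.5.
A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])
x0_center, x0_radius = np.array([2.0, 0.0]), 0.1
unsafe_threshold = 2.5

def verify(horizon=2.0, dt=0.01):
    """Simulate the center trajectory and bloat it into a tube covering the initial ball.

    For dx/dt = A x, trajectories starting within distance r of the center satisfy
    ||x(t) - x_center(t)|| <= r * exp(L t) with L = ||A||_2 (a coarse Lipschitz-style bound;
    this crude exponential blows up quickly, which is why tighter discrepancy functions matter).
    """
    L = np.linalg.norm(A, 2)
    x, t = x0_center.copy(), 0.0
    while t <= horizon:
        radius = x0_radius * np.exp(L * t)        # bloating factor at time t
        if x[0] + radius >= unsafe_threshold:     # tube touches the unsafe half-plane
            return f"UNKNOWN at t={t:.2f} (tube too coarse, or genuinely unsafe)"
        x = x + dt * (A @ x)                      # forward-Euler simulation step
        t += dt
    return "SAFE over the horizon"

print(verify())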

We have also developed the award winning tool Compare Execute Check Engine (C2E2) which implements this verification technique. C2E2 has successfully been applied for verifying the invariance of various sophisticated hybrid systems: a suite of powertrain control systems from Toyota, medical devices in conjunction with models of human physiology, a parallel aircraft landing protocol from NASA, etc.

Bio:

I am Chuchu Fan, a Ph.D. candidate in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, working as a research assistant in Prof. Sayan Mitra’s group since 2013. Concurrently, I am the technical director and principal investigator of a startup company, Rational Cyphy Inc., leading a group to develop software for certifying and debugging cyber-physical systems such as driver-assistance and automated-driving systems. Before joining Prof. Mitra’s group, I received my Bachelor of Science degree in 2013 with the highest honors from the Department of Automation at Tsinghua University, Beijing. I also worked as a visiting scholar in Prof. Laurent Itti’s group at the University of Southern California in 2012. I am the recipient of several prestigious awards, including the Rambus Fellowship (2016), the Best Verification Result Award at CPS Week (2015), the Soar Foundation Fellowship (2012), and the Samsung Fellowship (2011).

I have been working on cyber-physical systems by developing and applying formal verification methods to them. My research mainly focuses on providing formal guarantees for 1) invariant verification/falsification and 2) closeness relationships among serial models, which are two important and pervasive aspects of cyber-physical systems. My work has been published and presented at several top academic conferences, for example, Computer Aided Verification (CAV 2014, 2015, 2016), Automated Technology for Verification and Analysis (ATVA 2015), Cyber-Physical Systems Week (CPS Week 2015), and Embedded Software (EMSOFT 2016). In particular, my work on verifying Toyota’s powertrain controller received the Robert Bosch Most Promising Verification Result Award at CPS Week 2015. I have also published my findings in several academic journals: IFAC Nonlinear Analysis: Hybrid Systems (2016), IEEE Design & Test (2015), and IEEE Signal Processing Letters (2013). Meanwhile, I serve as a reviewer and sub-reviewer for several international conferences and workshops on cyber-physical systems and formal methods. I am also one of the developers of the formal verification tool Compare Execute Check Engine.

I have gained industrial experience through several internships. I worked as a research intern at the Toyota Technical Center in 2015, where I collaborated with the model-based design group on advanced techniques to verify the safety of large-scale embedded systems in vehicles. In 2012, I worked at Microsoft Research Asia as an intern in the mobile and sensing systems group, where I gained experience developing embedded software for mobile phones.

I have also been active in academic outreach and service. I was a selected participant of the French-American Doctoral Exchange Seminar in 2016 and the CRA-Women Grad Cohort Workshop. I also served as the vice president of the Students’ Association of Science and Technology at Tsinghua University for two years.

Position: Graduate Student Researcher

Current Institution: University of California Santa Barbara

 

Abstract:
Verification Techniques for Hardware Security

Verification for hardware security has become increasingly important in recent years as our infrastructure is heavily dependent on electronic systems. Traditional verification methods and metrics attempt to answer the question: does my design correctly perform the intended specified functionality? The question my research addresses is: does my design perform malicious functionality in addition to the intended functionality? Malicious functionality inserted into a chip is called a Hardware Trojan.

My research is devoted to developing both new threat models and detection methodologies for a less studied but extremely stealthy class of Trojan: Trojans which do not rely on rare triggering conditions to stay hidden, but instead only alter the logic functions of design signals which have unspecified behavior, meaning the Trojan never violates the design specification. The main contributions of my work are 1) precise definitions for dangerous unspecified functionality in terms of information leakage and several methods to identify such functionality, 2) satisfiability-based formal methods to test potentially dangerous unspecified functionality for the existence of Trojans, and 3) numerous examples of how the proposed Trojans can completely undermine system security if inserted in on-chip bus systems, communication controllers, and encryption IP.
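
A toy illustration of the threat model (not one of the dissertation’s case studies): suppose a specification constrains an output only when a valid signal is high. A Trojan that drives the output with a secret bit on the remaining, unspecified cycles never violates the specification yet leaks information; exhaustively checking this tiny design shows both facts.

from itertools import product

SECRET_KEY_BIT = 1   # value the Trojan tries to exfiltrate

def spec_ok(valid, data, out):
    """Specification: when valid=1, out must equal data; otherwise out is unspecified."""
    return out == data if valid == 1 else True

def trojan_design(valid, data):
    """Malicious implementation: leak the key bit on the 'don't care' cycles."""
    return data if valid == 1 else SECRET_KEY_BIT

def benign_design(valid, data):
    """Reference implementation: drive 0 on unspecified cycles."""
    return data if valid == 1 else 0

violations = [(v, d) for v, d in product((0, 1), repeat=2)
              if not spec_ok(v, d, trojan_design(v, d))]
leaks = [(v, d) for v, d in product((0, 1), repeat=2)
         if trojan_design(v, d) != benign_design(v, d)]

print("specification violations:", violations)   # [] -- the Trojan is functionally 'correct'
print("observable differences  :", leaks)        # the secret is visible whenever valid == 0
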
Bio:
Nicole Fern received her undergraduate degree in Electrical Engineering from The Cooper Union for the Advancement of Science and Art in 2011. After graduation, she started working towards her combined Masters and Ph.D. degree in the Electrical and Computer Engineering department at UC Santa Barbara under the advisement of Professor Tim Cheng in the SoC Design and Test Lab. Her research interests include hardware verification and security. Her thesis focuses on identifying and verifying unspecified design functionality susceptible to malicious manipulation. She expects to graduate in June of 2016 and plans to continue her research as a post-doctoral researcher at UC Santa Barbara and as a visiting scholar at Hong Kong University of Science and Technology. Other hobbies include trail running, pottery, and drawing.

Position: Postdoctoral Fellow

Current Institution: Stanford University

Abstract:
Informative Projection Ensembles: Theory and Algorithms for Interpretable Models

Predictive systems designed to keep human operators in the loop rely on the existence of comprehensible classification models, which facilitate the decision process by presenting interpretable and intuitive views of the data. Often, domain experts require that the test data be represented using a small number of the original features. This serves to validate the classification outcome by highlighting the similarities to relevant training data. We present Informative Projection Ensembles, a framework designed to extract compact and communicative models, fundamental to decision support systems. For each prediction, an Informative Projection Ensemble uses one of several compact submodels, ensuring compliance with the stringent requirement on model size while attaining high performance through specialization. The decision is presented to the user together with a depiction of how the classification label was assigned. In addition to complexity bounds for the ensembles, we present case studies of how our framework makes automatic classification transparent by revealing previously unknown patterns in biomedical data.

We provide strong statistical convergence guarantees for our ensembles when they are used with specific classes of groupings and predictors, quantifying the impact of various parameters on the complexity of the problem. Specifically, we apply compression-based bounds to analyze the problem of simultaneously learning groups and predictors. Our results illustrate how the complexity of the ensembles scales with parameters of the underlying structure rather than the original dimension of the ambient space. We present a variety of techniques that construct Informative Projections, allowing a more precise recovery of patterns existing in data. The learning procedure is flexible, not only in terms of the hypothesis classes for the local models and for the selection function, but also in terms of model optimization. The current tools allow a trade-off between model fidelity and learning speed. Our experiments show that the methods we introduce can discover and leverage low-dimensional structure in data, if it exists, yielding models that are accurate, compact, and interpretable.
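
As a rough, illustrative sketch of routing each query to a small, human-readable submodel (this is not the framework’s learning algorithm, selection function, or bounds; the projection search here is a brute-force scan over feature pairs, and a bundled scikit-learn dataset stands in for the biomedical data discussed here), consider:

from itertools import combinations

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Candidate "informative projections": all 2-feature views, each with a small local model.
candidates = []
for i, j in combinations(range(X.shape[1]), 2):
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr[:, [i, j]], y_tr)
    candidates.append(((i, j), clf))

# Keep the three projections with the best training accuracy as the ensemble.
ensemble = sorted(candidates, key=lambda c: c[1].score(X_tr[:, list(c[0])], y_tr),
                  reverse=True)[:3]

def predict(x):
    """Route the query to the submodel that is most confident on it (2 features only)."""
    best = max(ensemble, key=lambda c: c[1].predict_proba(x[list(c[0])].reshape(1, -1)).max())
    (i, j), clf = best
    label = clf.predict(x[[i, j]].reshape(1, -1))[0]
    return label, (i, j)   # the feature pair shown to the expert alongside the decision

preds = np.array([predict(x)[0] for x in X_te])
print("ensemble accuracy:", (preds == y_te).mean())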

One of our case studies targets osteoarthritis, a major chronic disease that we still cannot treat. This is partly due to the lack of known biomarkers that can predict disease progression, which are actively sought by the orthopedics research community and the pharmaceutical industry. We analyze public data from the FNIH Osteoarthritis Progression Biomarkers project. Given 144 candidate biomarkers for 300 patients, the task is to identify the most effective biomarkers indicating whether a patient will progress after four years. Our ensemble pinpoints a set of features based solely on measurements from the baseline visit. This means that we would be able to screen patients at a single time point and predict whether their joint health will worsen over the course of the following few years, based on the volumes of particular regions in their meniscus and cartilage. This, in turn, has significant implications for osteoarthritis management and prevention, in that the identified anatomical structures may be used as targets to test the effect of novel drugs in clinical trials.

Bio:

Madalina Fiterau is a Postdoctoral Fellow in the Computer Science Department at Stanford University, working with Professors Chris Re and Scott Delp in the Mobilize Center. Madalina obtained her PhD in Machine Learning from Carnegie Mellon University in September 2015, advised by Professor Artur Dubrawski, and also holds a B.Eng. from the Politehnica University of Timisoara, Romania. The focus of her PhD thesis, entitled “Discovering Compact and Informative Structures through Data Partitioning”, was on learning interpretable ensembles, with applicability ranging from image classification to a clinical alert prediction system. Madalina is currently expanding her research on interpretable models, in part by applying deep learning to obtain salient representations from biomedical “deep” data, including time series, text, and images. The ultimate goal is to fuse these representations with structured biomedical data to form comprehensive models for clinical instability as well as medical conditions such as cerebral palsy, osteoarthritis, obesity, and running injuries.

Madalina is the recipient of the GE Foundation Scholar Leader Award for Central and Eastern Europe. Her paper “Deep Neural Decision Forests” received the Marr Prize for Best Paper at ICCV 2015, and her presentation “Using expert review to calibrate semi-automated adjudication of vital sign alerts in Step Down Units” won a Star Research Award at the Annual Congress of the Society of Critical Care Medicine 2016. She has organized two editions of the Machine Learning for Clinical Data Analysis workshop at the Neural Information Processing Systems Conference (NIPS), in 2013 and 2014. She has also published papers at, and serves on the program committees of, top conferences and journals (NIPS, ICML, AAAI, IJCAI, JBI). She currently holds a fellowship supported by the National Institutes of Health (NIH).

Position: Postdoctoral Associate

Current Institution: Verily

Abstract:
Optical Design Considerations for High Conversion Efficiency in Photovoltaics

Improvements in the efficiency of photovoltaics lower cost, as higher efficiency reduces the overhead costs of installation, maintenance, and grid integration. For high-efficiency photovoltaics, optimization of both current and voltage is necessary. High current is achieved by absorbing most of the above-bandgap photons and then extracting the resulting photo-generated electrons and holes. To achieve high absorption in thin films, surface texturing is necessary. Surface texturing allows for absorption enhancement, also known as light trapping, due to total internal reflection. However, in subwavelength-thickness solar cells (~100 nm thick), the theory of light trapping is not understood, and both the maximum achievable absorption and the optimal surface texture are open questions. Computational electromagnetic optimization is used to find surface textures yielding an absorption enhancement of 40 times the absorption in a flat solar cell, the highest enhancement achieved in a subwavelength-thick solar cell with a high index of refraction. The optimization makes use of adjoint gradient methods, which make the problem of designing a 3D surface computationally tractable.

However, while high current requires high absorption, high voltage requires re-emission of the absorbed photons out of the front surface of the photovoltaic cell. This re-emission out of the front of the solar cell is required by the detailed-balance formalism outlined by Shockley and Queisser in 1961. At the open-circuit voltage condition, where no current is collected, ideally all absorbed photons are eventually re-emitted out of the front surface of the solar cell. The small escape cone for a semiconductor/air interface, as described by Snell’s law, makes it difficult for a photon to escape out of the front surface; it is much more likely for the luminescent photon to be lost to an absorbing back substrate. Thus, a back reflector on a solar cell is crucial to obtaining high voltage, as it helps the internally emitted photons in the cell escape out of the front surface. The open-circuit voltage difference between a solar cell with a back mirror and a solar cell with an absorbing substrate is quantified, and it is found that the benefit of using a back mirror depends on the absorptivity of the solar cell material. The back-mirror concept is extended to the sub-cells of a multijunction cell, and an air gap as an “intermediate” reflector is proposed and analyzed. In a dual-junction solar cell, it is shown that proper mirror design with air gaps and antireflection coatings leads to an increase in open-circuit voltage, resulting in a ~5% absolute efficiency increase in the solar cell. This concept has been validated experimentally in a 38.8% efficient 4-junction cell created by the National Renewable Energy Laboratory.
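
The link between re-emission and open-circuit voltage can be written compactly. In the detailed-balance picture the following standard relations apply (the notation is assumed here, not quoted from this particular work):

V_{oc} = \frac{kT}{q}\,\ln\!\left(\frac{J_{sc}}{J_{0}} + 1\right),
\qquad
V_{oc} \approx V_{oc}^{\mathrm{ideal}} - \frac{kT}{q}\,\left|\ln \eta_{\mathrm{ext}}\right|

where J_sc is the short-circuit current density, J_0 is the dark saturation current density set by detailed balance, and eta_ext is the probability that an internally emitted photon ultimately escapes through the front surface. A good back mirror raises eta_ext, shrinking the |ln eta_ext| penalty and hence raising the open-circuit voltage, which is the effect the abstract quantifies.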

Bio:
Vidya Ganapati is a Postdoctoral Associate at Verily (formerly Google[x] Life Sciences), working on robotic surgery. She received her Ph.D. in Electrical Engineering & Computer Science at the University of California, Berkeley in 2015 and was advised by Prof. Eli Yablonovitch. Her graduate work centered on the optimization and thermodynamics of high-efficiency photovoltaics. She was a recipient of the Department of Energy Office of Science Graduate Fellowship and the UC Berkeley Chancellor’s Fellowship. Her undergraduate research at the Massachusetts Institute of Technology was advised by Prof. Tonio Buonassisi and focused on imaging microdefects in multicrystalline silicon with infrared birefringence. Her current research interests include applying optimization algorithms to applications in photovoltaics, renewable energy systems, and bioimaging.

Position: Ph.D. Candidate

Current Institution: Carnegie Mellon University

Abstract:
Analyzing Response Time in Systems with Redundant Requests

Reducing latency is a primary concern in computer systems. As cloud computing and resource sharing become more prevalent, the problem of how to reduce latency becomes more challenging because there is a high degree of variability in server speeds. Recent computer systems research has shown that the same job can take 12x or even 27x longer to run on one machine than another, due to varying background load, garbage collection, network contention, and other factors. This server variability is transient and unpredictable, making it hard to know how long a job will take to run on any given server, and therefore how best to dispatch and schedule jobs.

An increasingly popular strategy for combating server variability is redundancy. The idea is to create multiple copies of the same job, dispatch these copies to different servers, and wait for the first copy to complete service. A great deal of empirical computer systems research has demonstrated the benefits of redundancy: using redundancy can yield up to a 50% reduction in mean response time. As redundancy has gained prominence in systems, the theoretical community has in recent years begun to analyze the performance of systems with redundancy. Unfortunately, most of this work provides only bounds and approximations for performance, and most makes simplifying assumptions for mathematical tractability that lead to unrealistically optimistic results.
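
As a toy illustration of why waiting for the first of several copies helps (a hedged sketch with made-up lognormal server slowdowns; it ignores queueing and is not the model analyzed in this work):

import numpy as np

rng = np.random.default_rng(0)

def response_times(n_jobs, d, rng):
    """Completion time per job when d copies run on servers with random slowdowns.

    Slowdowns are drawn i.i.d. from a heavy-tailed lognormal distribution
    (an assumption made purely for illustration); the job finishes as soon
    as its fastest copy does, and queueing effects are ignored.
    """
    slowdowns = rng.lognormal(mean=0.0, sigma=1.0, size=(n_jobs, d))
    return slowdowns.min(axis=1)          # nominal service time of 1.0 is implicit

for d in (1, 2, 4):
    t = response_times(100_000, d, rng)
    print(f"d={d}: mean={t.mean():.2f}  99th percentile={np.quantile(t, 0.99):.2f}")

In this toy setting both the mean and the tail shrink as d grows; the point of the work above is that a faithful analysis must also charge the system for the extra load the copies create, which simple models like this one ignore.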

The two major foci of my work are (1) to derive the first exact analysis of response time in systems with redundancy, and (2) to develop a more realistic model of redundancy and to design and analyze dispatching and scheduling policies that yield good performance within this model. We show that even in a realistic setting in which running multiple copies of the same job adds load to the system, it is possible to design redundancy policies that provably improve performance.

Bio:
Kristy is a PhD candidate in the Computer Science Department at Carnegie Mellon University, where she works with Mor Harchol-Balter. Her research interests are in queueing theory and performance modeling of computer systems. Her current work focuses on analyzing performance in systems with redundant requests, in which jobs create multiple copies of themselves but require only one copy to complete service. Kristy received an NSF Graduate Research Fellowship in 2012 and a Google Anita Borg Memorial Scholarship in 2016. She obtained her B.A. in Computer Science from Amherst College in 2012.

Position: Graduate Student

Current Institution: MIT

Abstract:
Estimating the Response and Effect of Clinical Interventions

Much prior work in clinical modeling has focused on building discriminative models to detect specific easily coded outcomes with little clinical utility (e.g., hospital mortality) under specific ICU settings, or understanding the predictive value of various types of clinical information without taking interventions into account. In this work, we focus on understanding the impact of interventions on the underlying physiological reserve of patients in different clinical settings. Reserve can be thought of as the latent variability in patient response to treatment after accounting for their observed state.

Understanding reserve is therefore important to performing successful interventions, and can be used in many clinical settings. I attempt to understand reserve in response to intervention in two settings: 1) the response of intensive care unit (ICU) patients to common clinical interventions in the ICU, and 2) the response of voice patients to behavioral and surgical treatments in an ambulatory outpatient setting. In both settings, we use large sets of clinical data to investigate whether specific interventions are meaningful to patients in an empirically sound way.

Bio:
Marzyeh Ghassemi is a PhD student in the Clinical Decision Making Group (MEDG) in MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), supervised by Prof. Peter Szolovits. Her research uses machine learning techniques and statistical modeling to predict and stratify relevant human risks.

Marzyeh is interested in creating probabilistic latent variable models to estimate the underlying physiological state of patients during critical illnesses. She is also interested in understanding the development and progression of conditions like hearing loss and vocal hyperfunction using a combination of sensor data, clinical observations, and other physiological measurements.

While at MIT, Marzyeh has served on MIT’s Women’s Advisory Group Presidential Committee, as Connection Chair to the Women in Machine Learning Workshop, on MIT’s Corporation Joint Advisory Committee on Institute-wide Affairs, and on MIT’s Committee on Foreign Scholarships. Prior to MIT, Marzyeh received two B.S. degrees, in computer science and electrical engineering, with a minor in applied mathematics from New Mexico State University as a Goldwater Scholar, and an M.Sc. degree in biomedical engineering from Oxford University as a Marshall Scholar. She also worked at Intel Corporation in the Rotation Engineering Program, and then as a Market Development Manager for the Emerging Markets Platform Group.

Position: Analog Engineer

Current Institution: Intel

Abstract:
Ultra-Low Power Multi-Channel Data Conversion with a Single SAR ADC for Mobile Sensing Applications

Traditional data compression for sparse signals like bio-signals and image signals is applied after front-end Nyquist-rate data sampling. The recently emerging compressive sensing (CS) theory suggests that signal sparsity can be exploited to enable sub-Nyquist-rate sampling and thus save commensurate power and hardware complexity in the front-end sensor. Although in recent years CS has been actively exploited to reconstruct single-channel signals at sub-Nyquist rates in applications such as cognitive radios and bio-sensors, very few previous works address its feasibility for multi-channel ADCs. Moreover, most previous single-channel designs use active components such as integrators and op-amps to perform the weighted summation required by the CS process, which is area- and power-consuming and severely limits linearity. An area- and power-efficient, high-linearity architecture for a multi-channel CS-based ADC is therefore desirable. Our work proposes a CS-based SAR ADC that is capable of simultaneously converting 4-channel sparse signals at the Nyquist rate of one channel. The chip is fabricated in a 0.13μm CMOS process. Operating at 1 MS/s, the SAR ADC itself achieves a 66 dB SNDR and a 25 fJ/step FoM at 0.8 V. Using convex optimization methods, 4-channel 500 kHz-bandwidth signals can be reconstructed with a 66 dB peak SNDR and a 41% max occupancy, leading to an effective FoM per channel of 6.25 fJ/step.
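
As a hedged, software-only illustration of the reconstruction step (a generic ISTA solver for the l1-regularized least-squares problem, with a random sensing matrix standing in for the chip's on-die weighted summation; none of the specifics below are from the actual design):

import numpy as np

rng = np.random.default_rng(1)

# Toy problem: recover a sparse vector x from m < n random linear measurements y = A x.
n, m, k = 256, 64, 8                       # signal length, measurements, sparsity (assumed)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

def ista(A, y, lam=0.01, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))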

Bio:
Wenjuan Guo received the B.S. degree from Tsinghua University, Beijing, China, in 2011 and the Ph.D. degree from The University of Texas at Austin, TX, USA, in 2016. Her Ph.D. dissertation focused on ultra-low-power, high-performance SAR ADC design. During her Ph.D., she taped out three chips. After graduation, she continued to work with her supervisor, Dr. Nan Sun, as a Post-Doctoral Fellow and extended her research to high-resolution time-to-digital converter (TDC) design, taping out another two chips within three months. From June 2013 to May 2014, she was a Design Intern on the DAC team of Texas Instruments, Dallas, TX, where she worked on a 16-bit R-2R DAC design. She received the Texas Instruments Fellowship in 2014 and 2015. Currently she is an analog design engineer on the analog-and-mixed-signal team at Intel, Austin, TX, USA.

Position: PhD student Final year

Current Institution: Duke University

Abstract:
Differential Privacy in the Wild: current practices & open challenges

Differential privacy has emerged as an important standard for privacy-preserving computation over databases containing sensitive information about individuals. Over the last decade, research on differential privacy spanning a number of areas, including theory, security, databases, networks, machine learning, and statistics, has produced a variety of privacy-preserving algorithms for many analysis tasks. Despite these maturing research efforts, adoption of differential privacy by practitioners in industry, academia, or government agencies has so far been rare. In this talk, we will cover state-of-the-art techniques for differentially private computation on tabular data, highlight real-world applications on complex data types, and identify research challenges in applying differential privacy to real-world applications.
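
For readers new to the area, the canonical building block is the Laplace mechanism; the sketch below is generic and not tied to any system discussed in the talk, and the data and epsilon value are made up:

import numpy as np

rng = np.random.default_rng(7)

def laplace_count(data, predicate, epsilon):
    """Release a differentially private count.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 47, 52, 29, 61, 44]              # toy data (assumed)
print(laplace_count(ages, lambda a: a >= 40, epsilon=0.5))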

Bio:

Xi He is a fourth-year Ph.D. student in the Computer Science Department at Duke University. Her research interests lie in privacy-preserving data analysis and security. She also received an M.S. from Duke University and a double degree in Applied Mathematics and Computer Science from the National University of Singapore. Xi has been working with Prof. Machanavajjhala on differential privacy since 2012, and has published several papers in SIGMOD and VLDB. Her research on ‘Differential Privacy Trajectories Synthesis’ received the Outstanding Ph.D. Research Initial Project Award from the Duke CS Department in 2014. She was selected as a member of the U.S. delegation to the 2nd Heidelberg Laureate Forum. As a female CS researcher, she is also a recipient of a Grace Hopper Conference Scholarship Grant in 2014 and led the Duke ACM-W chapter as president in 2015-2016.

Position: Postdoctoral Research Fellow

Current Institution: Stanford University

Abstract:
Measurement of the Spin Hall Effect in Monolayer WSe2 2D Materials for Applications in Spin-Based Computing

The possibility of encoding information in spin for more energy-efficient switching in computing has given rise to the thriving field of spintronics (1). However, despite the possible energy benefits for memory and logic, a high current density is required to switch a nanomagnet via spin-transfer-torque current, which can lead to device degradation via heating. The discovery of the spin Hall effect in heavy metals such as Ta and Pt has led to a 10⨉ reduction in the magnetic switching current density (2). The spin Hall effect produces a spin current perpendicular to the charge current due to strong spin-orbit coupling in the heavy metal.
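
In the standard phenomenological description (general textbook form, not a measurement from this work), the charge current density \mathbf{J}_c generates a transverse spin current density

\[ \mathbf{J}_s = \theta_{SH}\,\frac{\hbar}{2e}\,\big(\mathbf{J}_c \times \hat{\boldsymbol{\sigma}}\big), \]

where \theta_{SH} is the spin Hall angle and \hat{\boldsymbol{\sigma}} is the spin polarization direction. A larger \theta_{SH} means less charge current, and therefore less Joule heating, is needed to deliver a given spin current, which is why materials with strong spin-orbit coupling such as monolayer WSe2 are of interest.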

The research community is also excited by transition metal dichalcogenide (TMD) 2D materials such as WSe2 for highly-scaled transistor applications. It is predicted that monolayer WSe2 has a high spin-orbit coupling (3). Thus, in addition to the valley Hall effect, it should exhibit a spin Hall effect.

We report results on measuring the spin Hall effect in monolayer WSe2 via the magneto-optical Kerr effect. We exfoliate a WSe2 flake onto Si(substrate) / SiO2(285 nm) and pattern it using electron-beam lithography into a 5 μm wide, 11 μm long rectangle. We then do a second lithography step to place 40 nm thick Pd electrodes to p-dope the flake and act as source/drain contacts. The sample is back-gated through the SiO2.

We measure the spin accumulation using an optical setup. The sample is in a 3 ⨉ 10^-6 Torr and 78 Kelvin environment. We linearly polarize a 672 nm continuous-wave laser and measure the change in polarization angle of the light reflected off the WSe2. The sample is rastered under the laser spot (0.5 μm beam diameter) in 0.5 μm steps. We apply DC back-gate voltage VG = -90 V and AC drain voltage VD = 10 V at 2.5 kHz, with the source terminal grounded. Our Kerr rotation mapping of the flake shows opposite change in polarization angle on the edges of the flake transverse to the current direction, indicative of opposite spin accumulation on the two edges. We measure a maximum Kerr rotation of Δ

Bio:

I received a B.A. in physics (and philosophy) from UC Berkeley in 2008 and a Ph.D. in physics from Harvard University in 2015. For my dissertation research I was cross-registered at Harvard and MIT, and I had advisors from both universities. My primary advisor was in electrical engineering at MIT. I am now a postdoctoral fellow in electrical engineering at Stanford, and I will apply to faculty positions this Fall/Winter. I am excited by a career as an academic researcher, where I can combine my growing strengths in research, teaching and mentoring, effective science and engineering communication, and grant writing.

My research is at the intersection of electrical engineering, physics, and materials science. In general I am interested in using emerging materials and physics for more energy efficient computing, from the transistor through the system level. During my Ph.D. I designed and fabricated nanotechnology devices for more energy-efficient logic using magnetic materials and magnetic domain walls, resulting in device and circuit prototypes. I am now working on extending these types of spintronics devices to full computing systems and integrating them with silicon transistors, 2D materials, and carbon nanotube transistors. I am also working on exploring new physics for spin-based switches, for example the spin Hall effect in 2D materials.

I have 16 research publications in refereed journals, 1 patent (first inventor), and have given 19 research talks at conferences and universities, including 7 invited talks. I was a speaker at the 2015 IEDM conference. During my Ph.D., I received a Department of Energy graduate research fellowship that provided 3 years of full support. I co-wrote a successful NSF grant with my Ph.D. advisor, and recently helped write a multi-university NSF Engineering Research Center grant.

In addition to the research I have done at Stanford, MIT, and Harvard, I worked as an R&D consultant for Applied Materials Varian from 2013-2014; did my undergraduate dissertation research in atomic/molecular/optical physics at the University of Auckland in 2008; did research in x-ray mammography at the University of Pennsylvania in 2007; and did research in experimental cosmology at Lawrence Berkeley National Laboratory from 2005-2007. This basis in a broad range of research topics allows me to better communicate with researchers outside my specific expertise.

I have been involved in teaching and mentoring throughout my Ph.D. and postdoc. Most recently I helped my Stanford advisor teach a freshman course called “What is Nanotechnology.” I was a guest lecturer and led the students’ hands-on project, in which they built and tested their own silicon transistors.

I strongly believe that outreach is an important part of a researcher’s career. I am currently volunteering with an education nonprofit called STEMBusUSA that encourages STEM excitement in K-12 students. This program reaches over 30,000 students nationwide. I am helping them develop benchmarking metrics to understand the effectiveness of their programs. In the past I led a monthly event that promoted science and engineering to the public, growing the program from 20 to 100+ monthly attendees.

Position: Postdoctoral Fellow

Current Institution: Carnegie Mellon University

Abstract:
Data-driven synthesis and evaluation of syntactic facial expressions in ASL animation

Deaf adults using sign language as a primary means of communication tend to have low literacy skills in written languages due to limited spoken language exposure and other educational factors. For example, standardized testing in the U.S. reveals that a majority of deaf high school graduates perform at or below a fourth-grade English reading level. If the reading level of text on websites, television captioning, or other media is too complex, these adults may not comprehend the conveyed message despite having read the text. The number of people using sign language as a primary means of communication is considerable: 500,000 in the U.S. (American Sign Language – ASL) and 70 million worldwide. Technology to automatically synthesize linguistically accurate and natural-looking sign language animations can increase information accessibility for this population.

State-of-the-art sign language animation tools focus mostly on the accuracy of manual signs rather than on facial expressions. We investigate the synthesis of syntactic ASL facial expressions, which are grammatically required and essential to the meaning of ASL animations as shown by prior research. Specifically, we show that an annotated sign language corpus, including both the manual and non-manual signs, can be used to model and generate linguistically meaningful facial expressions, if it is combined with facial feature extraction techniques, statistical machine learning, and an animation platform with detailed facial parameterization. Our synthesis approach uses a data-driven methodology in which recordings of human ASL signers are used as a basis for generating face and head movements for animation. We train our models with facial expression examples that are represented as MPEG-4 facial action time series extracted from an ASL video corpus using computer-vision-based face tracking. To avoid idiosyncratic aspects of a single performance, we model a facial expression based on the underlying trace of movements learned from multiple recordings of different sentences where such expressions occur. Latent traces are obtained using Continuous Profile Models (CPM), which are probabilistic generative models building upon Hidden Markov Models.

To support generation of ASL animations with facial expressions, we enhanced a virtual human character in the open source animation platform EMBR with face controls following the MPEG-4 Facial Animation standard, ASL hand shapes, and a pipeline to embed MPEG-4 facial expression streams in ASL sentences represented as EMBR scripts with body movement information.

We assessed our modeling approach through comparison with an alternative centroid approach, where a single representative performance was selected by minimizing DTW distance from the other examples. Through both metric evaluation and an experimental user study with Deaf participants, we found that the facial expressions driven by our CPM models are of high quality and more similar to human performance of novel sentences than those produced by the centroid alternative. Our user study draws from our prior work in rigorous methodological research on how experiment design affects study outcomes when evaluating sign language animations with facial expressions.
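
As a hedged sketch of the baseline centroid selection described above (a generic dynamic-time-warping implementation over made-up 1D intensity traces; the actual system works on multi-dimensional MPEG-4 facial action series):

import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def centroid_example(examples):
    """Pick the recording whose total DTW distance to all others is smallest."""
    totals = [sum(dtw_distance(e, f) for f in examples if f is not e) for e in examples]
    return examples[int(np.argmin(totals))]

# Toy 'facial expression intensity' traces of different lengths (assumed data).
rng = np.random.default_rng(3)
examples = []
for _ in range(5):
    length = int(rng.integers(40, 60))
    examples.append(np.sin(np.linspace(0, np.pi, length)) + 0.1 * rng.standard_normal(length))
rep = centroid_example(examples)
print("selected representative length:", len(rep))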

Bio:

Hernisa Kacorri is a Post Doctoral Fellow at the Human-Computer Interaction Institute at Carnegie Mellon University. As a member of the Cognitive Assistance Lab she works with Chieko Asakawa, Kris Kitani, and Jeff Bigham to help people with visual impairment understand the surrounding world. She recently received her Ph.D. in Computer Science from the Graduate Center CUNY, as a member of the Linguistic and Assistive Technologies Lab at CUNY and RIT, advised by Matt Huenerfauth. Her dissertation focused on developing mathematical models of human facial expressions for synthesizing animations of American Sign Language that are linguistically accurate and easy to understand. She designed and conducted experimental research studies with deaf and hard-of-hearing participants, and created a framework for rapid prototyping and generation of animations of American Sign Language for empirical evaluation studies. To support her research, she contributed software to enhance the open source EMBR animation platform with MPEG-4 based facial expression and released an experimental stimuli and question dataset for benchmarking future studies. As part of the emerging field of human-data interaction, her work lies at the intersection of accessibility, computational linguistics, and applied machine learning. Her research was supported by NSF, CUNY Science Fellowship, and Mina Rees Dissertation Fellowship in the Sciences. During her Ph.D. Hernisa also visited, as a research intern, the Accessibility Research Group at IBM Research – Tokyo (2013) and the Data Science and Technology Group at Lawrence Berkeley National Lab (2015).
Hernisa’s research interest in accessibility was sparked at the National and Kapodistrian University of Athens, where she earned her M.S. and B.S. degrees in Computer Science and was a member of the Speech and Accessibility Lab, supervised by Georgios Kouroupetroglou. She was involved in a number of research projects that supported people with disabilities in Higher Education. She developed software to support audio rendering of MathML in Greek, contributed to MathPlayer, and developed an 8-dot Braille code for Greek monotonic and polytonic writing systems. She served as one of the two Assistive Technologies Specialists at the University of Athens, taught at national vocational training programs for blind students, and led seminars and workshops promoting accessibility.

Position: Ph.D. Student/Graduate Research Assistant

Current Institution: Boston University

Abstract:

Technology scaling, accompanied by saturation in voltage scaling, leads to a continued increase in on-chip power density. Power density has reached such high levels that, in current processors, not all transistors on the chip can be powered on at full performance without exceeding thermal constraints, a situation referred to as the dark silicon phenomenon. According to ITRS projections, 50% of the silicon area will be dark at the 8nm technology node. High temperature not only limits performance, but also degrades energy efficiency due to the exponential relationship between leakage power and temperature, and decreases processor reliability. In fact, a 10-15°C rise in temperature can shorten the processor lifetime by half. At larger scales, high temperature translates into the energy efficiency problem in data centers, whose worldwide energy requirements grew rapidly to reach 43 GW in 2013. In today’s data centers, over 30% of the electricity is consumed for cooling, which makes cooling efficiency one of the top challenges in exascale computing.

To overcome this problem, using Phase Change Materials (PCMs) has been proposed as a passive cooling solution. PCMs are compounds that store large amounts of heat at a near-constant temperature during phase change. Owing to this heat storage capability, PCM acts like a thermal buffer. This property of PCM can be leveraged as part of performance boosting techniques such as computational sprinting, which is temporarily exceeding the thermal design power (TDP) of the chip by activating the dark cores during short bursts of high computational demand. PCM extends the sprinting duration and thus, provides additional performance gain.

There is significant room for improvement in the design and operation of systems with PCM-based cooling. To unleash the true potential of PCM, it is necessary to have thermal models, which will enable extensive design space exploration and true evaluation of those systems in a fast and accurate manner. It is also crucial to develop runtime management techniques that will exploit PCM behavior to maximize its benefits. To this end, our work advances the latest research on PCM-based processor cooling by contributing the following:
• We develop a detailed PCM thermal model to be used in the design and evaluation of processors with PCM-enhanced cooling. We validate the accuracy of our model by comparing against computational fluid dynamics (CFD) simulations.
• We demonstrate the feasibility of PCM cooling and our thermal model on a hardware testbed with a PCM unit installed on top of the package.
• We propose a runtime management policy to maximize the benefits of PCM. Our PCM-Aware Adaptive Sprinting policy is motivated by the fact that PCM melts at different rates across the chip. Our policy exploits this fact via (i) monitoring the remaining unmelted PCM at runtime and (ii) adapting to changes in PCM state by adjusting the number, location, and voltage/frequency setting of the sprinting cores (a simplified control-loop sketch follows below). Experimental evaluation shows that Adaptive Sprinting provides 29% performance improvement and 22% energy savings in comparison to the state-of-the-art sprinting strategy.
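
The following is that simplified, hypothetical control loop: the thresholds, sensor fields, and decision labels are invented for illustration and are not the policy or parameters evaluated in this work.

from dataclasses import dataclass

@dataclass
class CoreState:
    core_id: int
    pcm_fraction_unmelted: float   # 0.0 (fully melted) .. 1.0 (fully solid), assumed sensor
    temperature_c: float

def adaptive_sprint_step(cores, demand_high, t_limit_c=85.0, pcm_reserve=0.2):
    """One control epoch: decide which cores may sprint and at what V/F level.

    Hypothetical policy: sprint only where enough solid PCM remains to absorb
    the extra heat, and back off cores that are hot or nearly out of PCM.
    """
    decisions = {}
    for c in cores:
        if not demand_high:
            decisions[c.core_id] = "nominal"
        elif c.pcm_fraction_unmelted > pcm_reserve and c.temperature_c < t_limit_c:
            decisions[c.core_id] = "sprint_high_vf"     # plenty of thermal buffer left
        elif c.pcm_fraction_unmelted > 0.05:
            decisions[c.core_id] = "sprint_low_vf"      # sprint gently, buffer is thinning
        else:
            decisions[c.core_id] = "nominal"            # PCM exhausted locally, stop sprinting
    return decisions

cores = [CoreState(0, 0.9, 70.0), CoreState(1, 0.15, 82.0), CoreState(2, 0.02, 88.0)]
print(adaptive_sprint_step(cores, demand_high=True))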

Bio:

I am a 5th year PhD candidate in the Department of Electrical and Computer Engineering (ECE), Boston University (BU), Boston, MA. I received my B.S. degree in Electrical and Electronics Engineering from the Middle East Technical University (METU), Turkey, in 2011. I graduated with a cumulative GPA of 3.83 and was ranked 8th out of 247 senior year students in METU EEE Department. In the PhD program, I finished my coursework requirement and passed my PhD qualifier exam in June 2013. I also passed my PhD prospectus defense exam in October 2015. I am currently a graduate research assistant working with Prof. Ayse Coskun and expect to graduate in early 2017. My research interests include thermal modeling and runtime management in processors and data centers. I particularly focus on using Phase Change Material (PCM)-based cooling to enhance processor energy efficiency. My main contributions in that field include developing a detailed PCM thermal model, demonstrating the feasibility of PCM-based cooling on a hardware testbed, and developing a runtime management policy to maximize the PCM benefits. As part of another research project, I worked on optimizing data center energy efficiency through cooling and performance-aware job allocation strategies. My publications include a book chapter, 2 journal papers, and 5 conference papers. During my PhD studies, I had a chance to attend many conferences (including DAC, InterPack, IGCC, ICCD, VLSI-SoC) and present my work. I was one of the recipients of Richard Newton Young Student Fellow Award at DAC, in 2014. In 2015, I was granted a fellowship to attend PhD Forum at DAC.

My work experience includes two internships. I interned at Advanced Micro Devices (AMD), MA, for 7 months in 2013. There, I worked on thermal modeling and design space exploration of PCM-based cooling systems. For my second internship, I was at Sandia National Labs, NM, for 3.5 months in 2015. I worked on the development of SST, which is a large-scale data center simulator.

I will soon be choosing a career path either as a faculty member in academia or as a researcher in an industrial facility, options I am currently considering equally. My main motivation to attend Rising Stars is to get recommendations and insightful information regarding both paths from experts, especially women, working in the area. I believe that such venues are not only great resources for networking, but also provide valuable mentoring opportunities from people at various stages of their careers. For example, we do not get the chance to hear a talk about “Work/Personal Life Balance” from a woman’s perspective at other workshops in computing. Moreover, professional success does not merely depend on the technical quality of the work. Claiming your ideas with confidence, owning your accomplishments, and negotiation skills will have a significant impact on securing a good position after graduation as well as on the rest of one’s work life. By attending Rising Stars, I hope to gain insight into developing such skills through a collection of talks, mentoring sessions, and panels.

Position: Research Associate

Current Institution: Pennsylvania State University

Abstract:
Stopping the disaster in interdependent networks: predicting, monitoring and recovering

Large-scale failures in data communication networks due to natural disasters, such as Hurricane Katrina in 2005, can affect the communicating entities in the network and endanger the lives of people in the affected area. A large body of work investigates the prediction of failure propagation or the design of recovery methods with the aim of stopping the disaster. Although most of these problems have already been studied for a single network, many issues remain unsolved in the case of interdependent networks. To fill this gap, we investigate approaches to stop a massive disruption in interdependent networks in three phases: predicting the propagation of failures in interdependent networks, monitoring failures, and recovering from existing failures.

A wide range of studies has investigated the propagation of phenomena across networked systems. These works focus on the size of the giant component and do not examine how the propagation evolves over time. They also consider one specific model of propagation rather than a general model that can incorporate different scenarios at once, so their approaches may only be useful for that specific propagation model. We propose a generalized model and study the propagation of failures over time. We study how an initial failure propagates among the nodes inside each network and across multiple networks for a general threshold model of propagation. Our analysis allows us to determine the most influential nodes in the propagation of failures and to predict the behavior of the propagation depending on the network coupling model. This leads to a preventive approach in which the most influential nodes of the network are protected. Our results indicate that by making only 5% of the nodes resistant to the propagating phenomena, we may be able to stop the propagation from one network to the other when less than 10% of the nodes are affected.
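
A hedged toy sketch of such a threshold cascade with node protection (all parameters are illustrative assumptions, degree is used as a crude stand-in for "most influential", and the cross-network dependency links of the full interdependent model are omitted for brevity):

import random

random.seed(0)

def random_graph(n, p):
    """Simple Erdos-Renyi-style adjacency list (one of the coupled networks)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def threshold_cascade(adj, seeds, theta=0.3, protected=frozenset()):
    """A node fails once the failed fraction of its neighbors reaches theta,
    unless it has been made resistant (protected)."""
    failed = set(seeds) - set(protected)
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            if v in failed or v in protected or not nbrs:
                continue
            if sum(1 for u in nbrs if u in failed) / len(nbrs) >= theta:
                failed.add(v)
                changed = True
    return failed

n = 200
net = random_graph(n, 0.03)
seeds = random.sample(range(n), 15)
protected = set(sorted(net, key=lambda v: len(net[v]), reverse=True)[: n // 20])  # top 5% by degree
print("failed without protection:", len(threshold_cascade(net, seeds)))
print("failed with protection:   ", len(threshold_cascade(net, seeds, protected=protected)))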

In the second phase, we consider the problem of placing services in a telecommunication network in the presence of failures, with the goal of failure monitoring. In contrast to existing service placement algorithms that focus on optimizing quality of service (QoS), we consider the performance of monitoring failures from end-to-end connection states between clients and servers, and investigate service placement algorithms that optimize monitoring performance subject to QoS constraints. Our evaluations based on real network topologies verify the effectiveness of the proposed algorithms in improving monitoring performance compared with QoS-based service placement.

Finally, we study network recovery after a massive disruption, assuming that only partial knowledge of the failure area is available. The goal is to introduce optimal recovery algorithms that reduce the number of unnecessary repairs under such partial knowledge. Our initial results show that the proposed algorithms outperform state-of-the-art recovery algorithms in the event of uncertain network failures, while allowing us to configure the trade-off between the complexity and accuracy of the algorithm.

Bio:

Hana Khamfroush is a research associate in the Computer Science and Electrical Engineering Department of Penn State University. Prior to this, Hana served for one year as a postdoctoral scholar in the Computer Science Department at Penn State, working with Prof. Thomas La Porta. Hana received her PhD with highest distinction from the University of Porto, Portugal, in collaboration with Aalborg University, Denmark, in November 2014. She received her B.Sc. and M.Sc. degrees in Electrical Engineering from Iran in 2005 and 2009, respectively. Her PhD research focused on network coding for cooperation in dynamic wireless networks. At Penn State, she is currently working on the security of interdependent networks and on network recovery after massive disruptions. Her research interests include complex networks, communication networks, wireless communications, and mathematical modeling and analysis. Hana received a four-year scholarship from the Portuguese Ministry of Science for her PhD, and was awarded numerous travel grants and fellowships from the European Union and other organizations. She has served on the technical program committees of the IEEE ICC, IEEE PIMRC, and EW conferences, and as a reviewer for many prestigious journals and conferences, including IEEE JSAC, IEEE Transactions on Communications, and Elsevier COMNET. Hana was recently selected as the social media co-chair of the N2Women community, where she initiated a series of online discussions for women in computer science to discuss gender issues.

She was invited as a qualified young researcher to participate in Heidelberg Laureate Forum (HLF) 2016.

Position: Graduate Research Assistant/Ph.D. Student

Current Institution: Georgia Institute of Technology

Abstract:

Kinetic tremors in conditions such as essential tremor affect patient movements that require high degrees of dexterity and precision. Common methods of treatment are medications (primidone, beta-blockers) and thalamic deep brain stimulation. Peripheral nerve stimulation has also been tried in patients with treatment-resistant tremor, but this technique has not been extensively used in patients because of the bulkiness of the stimulation systems used in these experiments, and the perceived lack of efficacy of this treatment modality. Therefore, we developed a wireless wearable stimulation system that uses 3-D accelerometric measurements of arm tremor characteristics for closed-loop optimization of stimulation parameters. The motion sensor data are wirelessly sent to a PC or smartphone that then analyzes tremor movements. The constant voltage mode stimulator is powered by a 3.7 V rechargeable Li-ion battery, and can generate pulses up to ± 25 V in amplitude. All custom designed electronics (18 x 28 mm²) are enclosed in a wrist-watch sized container.

Two subjects (19 and 20 years old) with kinetic tremor participated in this study. Round surface electrodes were placed on two sites over the radial and ulnar nerves. We initially adjusted stimulation amplitudes so that they were noticeable, but not uncomfortable, using 200 µs wide biphasic stimuli. We changed the amplitude, frequency, and duty cycle of the stimuli, as well as the duration of the inter-pulse train interval, while observing the amplitude of the kinetic tremor when subjects moved a small object from one cup to another with a spoon. The frequency, amplitude, and phase shift of the tremor before and during stimulation were analyzed and compared. We found that the tremor amplitude was reduced by up to 63% when 5 stimuli (100 Hz) were applied with 500 ms inter-stimulation intervals. The placement of the electrodes and skin impedance differed between subjects, so the stimulation parameters may have to be individualized for each patient. In future studies, we will develop an automated method of optimization of electrode positions and stimulation parameters, based on tremor characteristics.
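
A hedged sketch of the kind of accelerometer-based tremor quantification such a closed loop needs (illustrative only: the 4-12 Hz band, the 100 Hz sampling rate, the step size, and the target power are assumptions, not the study's parameters):

import numpy as np

def tremor_band_power(accel, fs, band=(4.0, 12.0)):
    """Power of the acceleration signal in a typical tremor frequency band."""
    accel = accel - np.mean(accel)                 # remove gravity / DC offset
    spectrum = np.abs(np.fft.rfft(accel)) ** 2
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].sum() / len(accel)

def adjust_amplitude(current_ma, power, target_power):
    """Toy proportional update of stimulation amplitude toward a target tremor power."""
    step = 0.1 * np.sign(power - target_power)     # hypothetical 0.1 mA step
    return float(np.clip(current_ma + step, 0.0, 5.0))

fs = 100.0                                         # assumed 100 Hz accelerometer
t = np.arange(0, 5, 1 / fs)
accel = 0.3 * np.sin(2 * np.pi * 6.0 * t) + 0.05 * np.random.default_rng(2).standard_normal(len(t))
p = tremor_band_power(accel, fs)
print("band power:", round(float(p), 4), "-> new amplitude:", adjust_amplitude(1.0, p, target_power=0.01))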

Bio:
Jeonghee Kim is a Ph.D. student in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, in the Neurolab advised by Dr. Stephen P. DeWeerth. She received B.S. degrees in electrical engineering from Kyungpook National University in Daegu, Korea, and the University of Texas at Dallas in Richardson in 2007 and 2008, respectively, and an M.S. degree in electrical engineering and computer science from the University of Michigan in Ann Arbor in 2009. Her research interests are the design of biomedical and rehabilitation systems with real-time closed-loop control and embedded mobile applications, human-computer interaction, and assistive technologies.

Position: Ph.D. Candidate

Current Institution: University of Illinois at Urbana-Champaign

Abstract:
Information theory and machine learning techniques for emerging genomic data

The completion of the Human Genome Project in 2003 opened a new era for scientists. Through advanced high throughput sequencing technologies, we now have access to a large amount of genomic data and we can use it to answer key biological questions, such as the factors contributing to the development of cancer. Large data sets and rapidly advancing sequencing technology pose challenges for processing and storing large volumes of genomic data. Moreover, the analysis of datasets may be both computationally and theoretically challenging because statistical methods have not been developed for new emerging data. In this work, I address some of these problems using tools from information theory and machine learning.

First, I focus on the data processing and storage aspects of metagenomics, the study of microbial communities in environmental samples and human organs. In particular, I introduce MetaCRAM, the first software suite specialized for metagenomic sequencing data processing and compression, and demonstrate that MetaCRAM compresses data to 2-13 percent of the original file size.

Second, I analyze a biological dataset assaying the propensity of DNA sequences to form a four-stranded structure called the “G-quadruplex” (GQ). GQ structures have been proposed to regulate diverse key biological processes including transcription, replication, and translation. I present the main factors that lead to GQ formation, and propose highly accurate linear regression and Gaussian process regression models to predict the likelihood of a DNA sequence folding into a GQ.
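
A hedged sketch of the regression setup (hand-crafted toy sequence features and synthetic labels stand in for the real assay data; scikit-learn's Gaussian process regressor is used generically and is not necessarily the model in the work):

import numpy as np
from itertools import groupby
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)

def sequence_features(seq):
    """Toy features: G content, C content, and length of the longest run of Gs."""
    longest_g = max((len(list(g)) for base, g in groupby(seq) if base == "G"), default=0)
    return [seq.count("G") / len(seq), seq.count("C") / len(seq), longest_g]

# Hypothetical training data: random sequences with made-up GQ propensity scores.
alphabet = np.array(list("ACGT"))
seqs = ["".join(rng.choice(alphabet, size=30)) for _ in range(100)]
X = np.array([sequence_features(s) for s in seqs])
y = X[:, 2] + 0.1 * rng.standard_normal(len(seqs))   # fake label correlated with G runs

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)
mean, std = gp.predict(X[:5], return_std=True)       # predictive mean and uncertainty
print(np.round(mean, 2), np.round(std, 2))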

Bio:
Minji Kim is a Ph.D. Candidate in the Electrical and Computer Engineering department at the University of Illinois at Urbana-Champaign, advised by Professor Olgica Milenkovic and Professor Jun Song. She received her BS in Electrical Engineering and Mathematics (Honors with Distinction) from the University of California, San Diego. Her research interests are in bioinformatics and computational biology, specifically in processing and analyzing genomic data using tools from information theory and machine learning. She is a recipient of the NSF Graduate Research Fellowship and Gordon Scholarship, a finalist of the Qualcomm Innovation Fellowship, and a member of Tau Beta Pi.

Position: Ph.D. Student

Current Institution: Carnegie Mellon University

Abstract:
Transport-Based Morphometry in Radiology and Applications

Patient care has come a long way since Paul Lauterbur and Sir Peter Mansfield invented magnetic resonance imaging (MRI) in the 1970s. Today, imaging studies are a vital part of accurate medical diagnosis and treatment for every part of the body, from assessing cardiac function to monitoring cancer metastases, checking for bone fractures and assessing brain damage after stroke. Each of these images contains rich, detailed and complex information about the mysterious human body. The key challenge is to extract meaningful information from the deluge of image data. Human visual inspection is the traditional method to interpret these images intelligently. However, with increasing modalities of imaging, resolution, and numbers of studies ordered, there is a growing need for machine vision techniques. Beyond simple automation, machine vision techniques are needed to decipher hidden processes that elude interpretation by human vision. The goals are to identify disease states that escape human detection, and model the changes enabling sensitive differentiation. Understanding the hidden changes would have far reaching impact on early diagnosis, understanding inscrutable medical diseases, and contributing objective measures to help clinical assessment.

Current morphometry techniques often preclude direct biological interpretation of differences enabling classification. In these approaches, the models transforming images to a representation in feature domain are non-invertible; thus, statistical functions constructed in the feature domain cannot be mapped to images that illustrate discriminating changes. We seek a modeling approach that enables invertible, nonlinear transformation of data to a representation that streamlines information extraction through machine learning. Such an approach would also enable direct visual interpretation of the changes leading to sensitive classification or regression models.

Our technique, Transport-Based Morphometry (TBM), addresses the limitations of traditional morphometry approaches. TBM is based on the mathematics of optimal mass transport (OMT) and enables fully automated, data-driven analysis and statistical results that are easily interpreted biologically. We extend the mathematics of OMT to enable application of TBM to 3D radiology data. We apply TBM to achieve the state of the art in a variety of clinical applications.
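
As a hedged intuition for the transport representation (a one-dimensional toy only: in 1D the optimal mass transport map reduces to quantile matching between two intensity distributions; TBM itself operates on full 3D images, which this sketch does not attempt):

import numpy as np

rng = np.random.default_rng(5)

def transport_map_1d(source, target, grid):
    """Monotone map T with T(source quantiles) = target quantiles (1D optimal transport)."""
    s_sorted = np.sort(source)
    t_sorted = np.sort(target)
    # Empirical CDF of the source evaluated on the grid, then inverse CDF of the target.
    cdf_vals = np.searchsorted(s_sorted, grid, side="right") / len(s_sorted)
    cdf_vals = np.clip(cdf_vals, 1e-6, 1 - 1e-6)
    return np.quantile(t_sorted, cdf_vals)

# Toy 'intensity' samples from two subjects (assumed Gaussian mixtures for illustration).
subject_a = np.concatenate([rng.normal(0.3, 0.05, 500), rng.normal(0.7, 0.05, 500)])
subject_b = np.concatenate([rng.normal(0.35, 0.07, 600), rng.normal(0.65, 0.04, 400)])
grid = np.linspace(0.1, 0.9, 9)
print(np.round(transport_map_1d(subject_a, subject_b, grid), 3))

The displacement T(x) - x is the kind of invertible representation the abstract argues for: statistics computed on it can be mapped back toward the image domain to visualize what changed, rather than remaining locked in a non-invertible feature space.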

Currently, diagnosis of osteoarthritis (OA) cannot be made until symptoms and irreversible damage on x-ray develop. TBM enables detection of OA three years in advance of symptoms with 86% accuracy based on the appearance of cartilage on knee MRIs. Focal damage in the medial condyle is identified as culprit for future progression to OA.

Copy number variants (CNV) in the 16p11.2 chromosomal locus are associated with many neurodevelopmental diseases, including autism and epilepsy. TBM allows sensitive prediction of the 16p11.2 genotype based on brain structure alone – 100% accuracy using white matter appearance. TBM identifies changes in white matter distribution (deletion carriers > controls > duplication carriers). Furthermore, in addition to classification, the TBM approach also facilitates regression tasks, showing that aerobic fitness is associated with changes in brain tissue distribution in areas that overlap with those affected in normal aging.

In the future, TBM has potential to bridge the knowledge gap between structure and function in a wide range of diseases.

Bio:

I am an MD-PhD student in the Medical Scientist Training Program (MSTP) at the University of Pittsburgh and Carnegie Mellon University. Currently I am in my third year of PhD at CMU, working under the supervision of Prof. Gustavo Rohde, who holds joint appointments in Electrical Engineering and Biomedical Engineering.

Prior to joining the MSTP program, I earned my B.S. and M.S. degrees in Electrical Engineering from Stanford University at the ages of 19 and 20, respectively.

My PhD research involves computer-aided extraction of anatomical information from high-resolution medical imaging data to aid in medical diagnostics. Interpreting subtle patterns in high resolution images eludes human visual inspection, yet we need to look across images from multiple subjects for data-driven learning – which is the subject of my research. This work has drawn attention from various bodies as I have received the Philip and Marsha Dowd Graduate student fellowship award, Hertz Foundation Finalist award, and a university-wide 3-minute thesis competition award. I am the author of 8 peer-reviewed publications, and of an additional 5 journal papers under various stages of review and preparation.

I am preparing to become a professor, a career path that I am motivated to pursue because it will enable me to combine my passion for image technology and signal processing research with my desire to improve patient care. My objective is to become a leading expert in biomedical imaging technology.

I became interested in pursuing an MD-PhD after I began taking classes in signal processing. From k-space and the Fourier transform in MRI to the projection-slice theorem in CT, I began to see the intimate relationship between signal processing theory and medical imaging. I subsequently pursued an internship at GE Healthcare, where I wrote software to remotely focus x-ray machines. There, I was inspired by the wide-reaching impact that engineers have on modern patient care.

However, I also became acutely aware of the gender imbalance in the fields I wanted to pursue. At Stanford, I only had two female EE professors – the rest were all male. Currently, in my spare time, I am a coordinator of the Women in Science and Medicine Association (WSMA). We work to connect young female scientists to mentors and discuss the challenges at the intersection of pursuing a career in academia and being a woman in STEM.

Position: PhD Student

Current Institution: Stanford University

Abstract:
Each year, judges across the United States make millions of decisions about whether criminal defendants should be released or detained in jail as they await final adjudication of their cases. While these predictions are currently made by judges mentally processing information about past cases, the growing digitization of case records raises the possibility of using new techniques from machine learning instead. In this talk, I will present our research on building machine learning algorithms that can complement human decision making in decision-critical settings such as the judiciary. Release rules guided by the predictions of these machine learning algorithms result in lower failure rates among released defendants compared to the release decisions of judges. Furthermore, I will discuss how machine learning can provide diagnostic insights into the patterns of mistakes made by judges. Lastly, I will provide an overview of some of our recent research on developing interpretable machine learning models that can be used to explain patterns of defendant behavior to judges.

Bio:

Himabindu Lakkaraju is a PhD student in Computer Science at Stanford University. Her research focuses on building machine learning algorithms that can complement human decision making in domains such as the judiciary, health care, and education. Some of her notable research involved developing prediction models that can help judges with bail decisions. She also developed a series of machine learning models that provide diagnostic insights into the patterns of mistakes made during the decision-making process. More recently, she has been focusing on building interpretable prediction models that not only optimize for predictive power but also explain how the algorithm makes its predictions. Such models allow domain experts to validate the prediction logic, which in turn increases their confidence in the efficacy of the model, thus bridging the gap between machine learning and its application to critical decision making. Her research is supported by a Robert Bosch Stanford Graduate Fellowship and a Google Anita Borg Scholarship. Prior to joining Stanford, Himabindu was a technical staff member at IBM Research, where she worked on natural language processing and sentiment analysis. Her research has been published in several top data mining conferences such as KDD, ICDM, SDM, and CIKM. Her research contributions have been recognized with numerous awards, including a best paper award at the SIAM International Conference on Data Mining and an IBM eminence and excellence award.

Position: Ph.D. Candidate

Current Institution: University of Pennsylvania

Abstract:
Toward Improved Robotic Perception via Online Machine Learning

Perceiving the world has been one of the long-established problems in computer vision and robotic intelligence. Whereas human perception works so effortlessly, even state-of-the-art algorithms experience difficulty in performing the same tasks. The two questions that I will address in my talk are as follows:

(1) While many learning techniques emphasize the quantity of data, the underlying difficulty is in recognizing the subset of the data that is relevant to the problem at hand. Due to advances in sensing and computing technology, the amount of real-time information a robot can receive and process is enormous. How can a robot estimate a non-stationary environment from this rich data? How can a robot learn online and selectively use noisy information?

(2) In order to enable high-level or interactive tasks in the real 3D world, both geometric and semantic scene understanding are critical. Humans are known to have two distinct visual processing systems called dorsal stream and ventral stream, which are often called “where” and “what” pathways respectively. As opposed to conventional approaches in computer vision that have parallelized the two issues, my study is motivated by the crosstalk between the two systems: Can a robotic visual system bootstrap the learning of both spatial information and the attributes of an object of interest?

With these questions in mind, my research has focused on how robotic perception can be improved via online learning. In this talk, I will discuss the combined problem of estimating 3D geometric parameters and learning appearance-based features of objects in an online learning framework, and present two case studies. First, I will present a study on monocular vision-based ground surface estimation and classification. The ground (or floor) is the most important background object, appearing everywhere on land. Being ubiquitous, the ground exhibits diverse visual features depending on where and when it is observed. In this study, online simultaneous geometric estimation and appearance-based classification of the ground is demonstrated using the KITTI benchmark dataset, a large-scale dataset developed for autonomous driving research. Second, I will talk about a learning approach for efficient model-based 3D object pose estimation. Knowing the precise 3D pose of an object is crucial for interactive robotic tasks such as grasping and manipulation. However, dealing with 3D models and running a 3D registration algorithm on noisy image data is typically expensive. By predicting the visibility of the geometric model and learning the discriminative appearance of the object in an online fashion, the suggested method is able to select only the relevant part of the data stream, which results in high efficiency and robustness in 3D registration. I will conclude the talk with ongoing projects and future work on improving 3D robotic perception via online learning.
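
A hedged sketch of one ingredient, online (recursive least-squares) estimation of a ground plane z = a*x + b*y + c from streaming 3D points (the synthetic data, noise level, and forgetting factor are chosen only for illustration; this is not the method presented in the talk):

import numpy as np

class RecursiveGroundPlane:
    """Online least-squares fit of z = a*x + b*y + c with exponential forgetting."""

    def __init__(self, forgetting=0.99):
        self.theta = np.zeros(3)               # [a, b, c]
        self.P = np.eye(3) * 1e3               # parameter covariance (large = uncertain)
        self.lam = forgetting

    def update(self, x, y, z):
        phi = np.array([x, y, 1.0])
        denom = self.lam + phi @ self.P @ phi
        k = (self.P @ phi) / denom             # Kalman-style gain
        self.theta += k * (z - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

rng = np.random.default_rng(6)
true_plane = np.array([0.05, -0.02, -1.5])     # assumed tilt and camera height
est = RecursiveGroundPlane()
for _ in range(2000):
    x, y = rng.uniform(-10, 10, size=2)
    z = true_plane @ np.array([x, y, 1.0]) + 0.02 * rng.standard_normal()
    theta = est.update(x, y, z)
print("estimated [a, b, c]:", np.round(theta, 3))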

Bio:
Bhoram Lee is a PhD candidate at GRASP (General Robotics, Automation, Sensing, and Perception) Lab, University of Pennsylvania, under the supervision of Prof. Daniel D. Lee. Before coming to Penn, she worked at SAIT (Samsung Advanced Institute of Technology) from 2007 to 2013 as a researcher. She received B.S. in mechanical and aerospace engineering in 2005 and M.S. in aerospace engineering in 2007 from Seoul National University (SNU), Korea. Her previous research experience includes visual navigation of UAVs, sensor fusion, and mobile user interactions. Bhoram Lee was a member of GNSS (Global Navigation Satellites Systems) Lab at SNU and her team won the 2nd prize at the 6th Korean Robot Aircraft Competition in 2007. During her years at Samsung, she (co-)authored more than 20 patent applications and was awarded the Samsung Best Paper Award 2012 Bronze prize as the first author. She was involved in many research projects including human pose estimation in AR (augmented reality) environments, and development of mobile motion UIs (user interfaces) and haptic UIs at SAIT. Bhoram Lee recently participated in the DARPA Robotics Challenge in 2015 as a member of team THOR, one of the finalists, and worked on 3D perception. She also has served as a teaching assistant during the past two years for offline and online robotics courses at Penn. Her current academic interest includes probabilistic estimation, robot vision, machine learning, and general robotics with a focus on improving robotic perception via online learning techniques. She currently resides in Havertown, PA, with her husband and their two daughters.

Position: Ph.D. Candidate

Current Institution: MIT

Abstract:
Blind Regression: Nonparametric Regression for Latent Variable Models via Collaborative Filtering

We introduce the framework of blind regression motivated by matrix completion for recommendation systems: given n users, m movies, and a subset of user-movie ratings, the goal is to predict the unobserved user-movie ratings given the data, i.e., to complete the partially observed matrix. Matrix completion has been well analyzed under the low-rank assumption, in which matrix factorization based approaches have been proven to be statistically efficient, requiring only $rn \log n$ samples, where $r$ is the rank of the matrix. Unfortunately, the low-rank assumption may not hold in practice, as a simple nonlinear transformation of a low-rank matrix could easily produce an effectively high-rank matrix, despite few free model parameters.

Following the framework of nonparametric statistics, we posit that user u and movie i have features x_1(u) and x_2(i) respectively, and their corresponding rating y(u,i) is a noisy measurement of f(x_1(u), x_2(i)) for some unknown function f. Whereas the matrix factorization literature assumes a particular function f(a,b) = a^T b, we relax this condition to allow all Lipschitz functions. In contrast with classical regression, the features x = (x_1(u), x_2(i)) are not observed, making it challenging to apply standard regression methods to predict the unobserved ratings.

Inspired by the classical Taylor’s expansion for differentiable functions, we provide a prediction algorithm that is consistent for all Lipschitz functions. In fact, the analysis through our framework naturally leads to a variant of collaborative filtering, shedding insight into the widespread success of collaborative filtering in practice. Assuming each entry is sampled independently with probability at least \max(m^{-1/2+\delta},n^{-1+\delta}) with \delta > 0, we prove that the expected fraction of our estimates with error greater than \epsilon is less than \gamma^2 / \epsilon^2 plus a polynomially decaying term, where \gamma^2 is the variance of the additive entry-wise noise term. Experiments with the MovieLens and Netflix datasets suggest that our algorithm provides principled improvements over basic collaborative filtering and is competitive with matrix factorization methods.
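A hedged sketch of the flavor of user-user collaborative filtering this connects to (a generic mean-adjusted nearest-neighbor predictor over a toy ratings matrix; it is not the exact estimator analyzed in the work):

import numpy as np

def predict_rating(R, u, i, k=2):
    """Predict R[u, i] from the k users most similar to u (by overlap variance) who rated i.

    R is a ratings matrix with np.nan marking unobserved entries.
    """
    candidates = []
    for v in range(R.shape[0]):
        if v == u or np.isnan(R[v, i]):
            continue
        overlap = ~np.isnan(R[u]) & ~np.isnan(R[v])
        overlap[i] = False                       # do not peek at the target column
        if overlap.sum() < 2:
            continue
        diff = R[u, overlap] - R[v, overlap]
        candidates.append((np.var(diff), np.mean(diff), v))
    if not candidates:
        return np.nan
    candidates.sort()                            # lowest variance of differences = most reliable
    # Average the neighbors' ratings of item i, shifted by the mean offset between the users.
    return float(np.mean([R[v, i] + offset for _, offset, v in candidates[:k]]))

R = np.array([[5, 4, np.nan, 1],
              [4, 5, 2, 1],
              [1, 1, 5, 4],
              [np.nan, 2, 4, 5]], dtype=float)
print("predicted R[0, 2]:", round(predict_rating(R, 0, 2), 2))

The variance of rating differences over the overlap acts as a reliability weight here, which is loosely the role the overlap statistics play in the error guarantee stated above.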

We show that the algorithm and analysis naturally extend to higher-order tensor completion under similar model assumptions. We can reduce tensor completion to matrix completion by flattening the dimensions and verifying that the required model assumptions still hold. We show that our simple and principled approach is competitive with state-of-the-art tensor completion algorithms when applied to image inpainting data. Our estimator is naively simple to implement, and its analysis sidesteps the complications of non-unique tensor decompositions. The ability to seamlessly extend beyond matrices to higher-order tensors suggests the general applicability and value of the blind regression framework.

Bio:

I am a PhD candidate in the Laboratory for Information and Decision Systems (LIDS) at the Massachusetts Institute of Technology, advised by Professors Asuman Ozdaglar and Devavrat Shah in the Department of Electrical Engineering and Computer Science. Before coming to MIT, I received my B.S. in Computer Science from the California Institute of Technology in 2011. I received an M.S. in Electrical Engineering and Computer Science from MIT in May 2013. I received the MIT Irwin Mark Jacobs and Joan Klein Jacobs Presidential Fellowship in 2011, and the National Science Foundation Graduate Research Fellowship in 2013.

I have worked on sparse matrix methods, specifically how to exploit sparsity or graph properties to approximate a single component of the solution to a linear system, or the largest eigenvector of a stochastic matrix. In the context of Markov chains, we proposed and analyzed a truncated Monte Carlo method in which the algorithm samples node-centric random walks. The algorithm trades off estimation accuracy against time complexity: it thresholds nodes with low stationary probability and obtains more accurate estimates for nodes with high stationary probability, while maintaining a bounded number of random walk steps. In the context of solving linear systems, we provided an analysis showing that an asynchronous implementation of coordinate descent converges to an estimate for a single component of the solution, where the convergence rate is a function of the sparsity of the matrix in addition to the condition number.

Recently, I have been interested in building a mathematical framework for social-data-driven decisions. This includes both preference learning, which may involve regression techniques and latent variable modeling, as well as learning structural relationships in the data, which involves spectral graph theoretic methods. We developed a nonparametric regression framework to design algorithms for estimating latent variable models in the context of matrix completion. Our method and analysis provide theoretical justification for the widely used heuristic of collaborative filtering, and lead to natural extensions to higher-order tensor completion. We are continuing to explore how to reduce sample complexity to handle the unique sparsity challenges that arise in online matching market contexts. I also hope to explore connections between recommendation systems and the social effects that result from human user responses to the system. By addressing both the inference and the human-decision aspects of this problem, I hope to build a more encompassing and systematic approach to designing recommendation systems.

Position: Research Assistant, Teaching Assistant

Current Institution: Princeton University

Abstract:
Rethinking Privacy in Information Networks and IoT systems

Information sharing is key to realizing the vision of a data-driven customization of our environment. Data that were earlier locked up in private repositories are now being increasingly shared for enabling new context-aware applications, better monitoring of population statistics, and facilitating academic research in diverse fields. However, sharing personal data gives rise to serious privacy concerns as the data can contain sensitive information that a user might want to keep private. Thus, while on one hand, it is imperative to release utility-providing information, on the other hand, the privacy of users whose data is being shared also needs to be protected.

Various privacy metrics, including differential privacy, have been proposed in the literature and have received considerable attention as mathematical foundations for defining and preserving privacy. However, previous privacy frameworks implicitly assume independent data tuples, a static database, and that the data itself is the sensitive quantity; data dependence, data dynamics, and sensitive inferences computed over the data are ignored. These three impractical assumptions become even more problematic in today’s big data era, where tuples within a database exhibit close correlation, large volume, rich semantics and complex structure. Therefore, we need to relax these assumptions and incorporate data dependence, data dynamics, and sensitive inferences computed over the data to formulate effective privacy frameworks.

First, tuple independence is a weak assumption in previous privacy frameworks, especially because tuple dependence occurs naturally in databases due to social interactions between users. For example, in a social network graph (with nodes representing users and edges representing friendship relations), the friendship between two nodes that are not explicitly connected in the graph can be inferred from the existence of edges between other nodes. To effectively incorporate tuple dependence, we propose dependent differential privacy (in our NDSS 2016 paper) as an important generalization of the existing differential privacy framework. Second, previous privacy frameworks consider only a static database and ignore data dynamics. In reality, the sequence of perturbed databases produced by these static frameworks provides an adversary with significantly more observations than a single perturbed database. To defend against such strategic adversaries, we propose the LinkMirage system (in our NDSS 2016 paper), which incorporates data dynamics into practical perturbation mechanisms. Finally, for certain kinds of data, such as sensor data in IoT systems, the private information is not the data itself but the sensitive inferences computed over the data. Previous work that treats the data itself as private imposes overly strong requirements and sacrifices utility. Therefore, we propose the DEEProtect system (in our CCS 2016 submission), which allows users to specify their privacy and utility preferences in terms of higher-level inferences and automatically translates these preferences into fine-grained perturbation policies that can be applied to sensor data in IoT systems at runtime.
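To illustrate why tuple dependence matters, below is a minimal sketch that contrasts the standard Laplace mechanism with a version whose noise is calibrated to a dependence-inflated sensitivity. The multiplicative dependence_coeff is an illustrative simplification for exposition only; the actual dependent differential privacy mechanism in the NDSS 2016 paper may differ.

import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    # Standard Laplace mechanism: add Lap(sensitivity / epsilon) noise.
    if rng is None:
        rng = np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

def dependent_laplace_mechanism(true_value, sensitivity, epsilon, dependence_coeff):
    # Illustrative assumption: when tuples are correlated, changing one record
    # can indirectly change others, so the effective sensitivity is inflated by
    # a dependence coefficient >= 1 before calibrating the noise.
    effective_sensitivity = dependence_coeff * sensitivity
    return laplace_mechanism(true_value, effective_sensitivity, epsilon)

# Example: releasing an average over 1000 correlated records bounded in [0, 1].
noisy = dependent_laplace_mechanism(true_value=0.42, sensitivity=1/1000,
                                    epsilon=0.5, dependence_coeff=3.0)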

Bio:
I am a PhD student in the Department of Electrical Engineering at Princeton University, which I joined in September 2013. My advisor is Prof. Prateek Mittal. I am interested in building secure and privacy-preserving systems. My current interests include privacy enhancing technologies, Internet-of-Things (IoT) security, trustworthy social systems, and network security. Specifically, I am very interested in 1) privacy enhancing technologies, such as big data privacy and differential privacy; and 2) other security problems such as IoT security and Sybil defenses. I am also very interested in machine learning and signal processing techniques, such as deep learning and compressive sensing. As first author, I have published two papers in the Network and Distributed System Security Symposium (NDSS 16) and one paper in the ACM Conference on Computer and Communications Security (CCS 15), both of which are among the top venues in the security community. During my PhD study, I have won the IBM PhD Fellowship in 2016, the Princeton Early PhD Award in 2015, the Anthony Ephremides Fellowship in 2014, and the Princeton First-year Graduate Student Fellowship in 2013. I also worked as a research intern at the IBM T. J. Watson Research Center in the summers of 2015 and 2016, in the Networking and Cloud Computing Group and the Cognitive IoT and Distributed Analytics Group, respectively. Prior to my PhD study, I obtained my Master’s and Bachelor’s degrees from the University of Science and Technology of China (USTC) in 2013 and 2010, respectively. During my study in China, I won several scholarships and honors, including the Guo Moruo Scholarship, the top scholarship at USTC, and the National Scholarship (twice), the top scholarship in China.

Email

Website

Position: Graduate Student Researcher

Current Institution: University of California at Berkeley

Abstract:

The widespread expectation that autonomous sensor networks will fuel massively accessible information technology, such as the Internet of Things (IoT), comes with the daunting realization that huge numbers of sensor nodes will be required, perhaps approaching one trillion. Needless to say, besides cost, energy will likely pose a major constraint in such a vision. The wireless module in a typical sensor node consumes 30 mW of power, of which half is spent on the receiver alone. The power-hungry transceiver calls for a sleep/wake strategy, which in turn requires an additional timing and control system that consumes 1 uW of extra power. A low-cost printed battery with 1 J of energy would only last 11.5 days even when the sensor node sleeps with only the sleep/wake control system running. On the other hand, if the receiver could consume zero quiescent power, the sensor node could listen at all times without draining the battery. The trillion-sensor wireless network would suddenly become feasible. A first-in-kind all-mechanical communication receiver front-end employing resonant micromechanical switch (i.e., resoswitch) technology has detected and demodulated frequency shift keyed (FSK) signals as low as −60 dBm at a VLF frequency of 20 kHz suitable for extremely long-range communications, all while consuming zero quiescent power when listening. The key to attaining high-quality signal reception and demodulation with zero quiescent power consumption derives from the use of heavily nonlinear amplification, provided by mechanical impact switching of the resoswitch. This approach would be inconceivable in a conventional receiver due to performance degradation caused by nonlinearity, but becomes plausible here by virtue of the RF channel-selection provided by the resonant behavior of the mechanical circuit.
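A quick back-of-the-envelope check of the sleep-mode lifetime quoted above (a 1 J battery drained only by a 1 uW sleep/wake control system), included simply to make the arithmetic explicit:

energy_j = 1.0          # printed battery capacity (J)
sleep_power_w = 1e-6    # sleep/wake timing and control power (W)

lifetime_s = energy_j / sleep_power_w
lifetime_days = lifetime_s / 86400
print(f"{lifetime_days:.1f} days")   # ~11.6 days, consistent with the ~11.5-day figure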

Bio:
Ruonan Liu attended the Ohio State University in 2008 and received her B.S. degree in Electrical and Computer Engineering with honors in 2011. She is currently a fifth-year Ph.D. student at the University of California at Berkeley. Her research focuses on ultra-low-power wireless communications using MEMS resonators. MEMS devices consume zero quiescent power and have extremely high quality factors, on the order of tens of thousands, making them ideal for low-power wireless receivers.

Email

Website

Position: Ph.D. Student

Current Institution: Washington University in St. Louis

Abstract:
Federated Scheduling for Real-Time Systems with Parallel Tasks

Real-time scheduling is used in many cyber-physical systems where applications interact with humans or the physical environment. The scheduling of real-time tasks is different from traditional scheduling because real-time tasks have deadlines, usually dictated by needs in the physical world. Therefore, the scheduling theory and system must provide guarantees assuring that all deadlines can be met. For more than four decades, researchers have developed theories to provide real-time guarantees for sequential tasks on single- and multi-processor machines. Recently, parallel tasks with real-time needs have been studied to keep up with the demands of emerging computation-intensive tasks and to use multi-core hardware platforms more effectively. However, parallel tasks present new challenges due to their potentially complex dependence structures.

This work proposes and analyzes a new scheduling algorithm, named federated scheduling, for real-time systems with parallel tasks. Federated scheduling calculates the minimum number of cores a parallel task requires to meet its deadline and assigns these dedicated cores to the task. Federated scheduling is proven to have the best theoretical performance. In addition, since tasks execute without interference from other tasks, federated scheduling also reduces scheduling overheads in practice. Based on these theoretical results, we implemented a federated scheduling system and show that with it we can now perform more computation within stringent deadlines. In a hybrid-testing framework, for example, this allows us to run simulations with larger and more accurate structures.
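For intuition, here is a minimal sketch of the core-assignment rule commonly associated with federated scheduling, stated in terms of a task's total work C, its critical-path length (span) L, and its implicit deadline D; the exact conditions and analysis in this work may differ from this simplified form.

import math

def federated_cores(work, span, deadline):
    # Commonly cited rule: a high-utilization task (work > deadline) receives
    #     n = ceil((C - L) / (D - L))
    # dedicated cores; a low-utilization task fits on a single (shared) core.
    if span > deadline:
        raise ValueError("infeasible: critical path exceeds the deadline")
    if work <= deadline:
        return 1
    return math.ceil((work - span) / (deadline - span))

# Example: C = 100 ms of work, L = 10 ms span, D = 40 ms deadline -> 3 cores
print(federated_cores(100, 10, 40))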

Bio:
Jing Li is a 5th-year PhD student at Washington University in St. Louis, co-advised by Professors Kunal Agrawal and Chenyang Lu. She is interested in designing theoretically sound and practically efficient schedulers for multiple parallel jobs. More specifically, she designs and analyzes schedulers for systems with dynamically arriving parallel jobs to provide various quality-of-service guarantees, such as optimizing latency-related objectives for cloud computing platforms (such as Bing search servers) and meeting deadlines for real-time systems (such as autonomous vehicles and other cyber-physical systems). Her work improves our theoretical understanding of how to schedule parallel tasks in multi-core systems to meet specific performance guarantees, but she is not stopping there and is actively involved in “closing the loop”, i.e., ensuring that this improved theoretical understanding also impacts how we implement schedulers in real systems. Her research output includes three journal papers in two of the top venues in her field (IEEE Transactions on Parallel and Distributed Systems, and Real-Time Systems) and eight conference publications in some of the top venues (RTAS’13, ECRTS’14, RTCSA’14, SODA’16, PPoPP’16, SPAA’16, ECRTS’13 and RTAS’16 – the latter two having earned Outstanding Paper Awards).

Email

Website

Position: Founder, CEO

Current Institution: Agile Focus Designs

Abstract:
Agile focusing in active optical systems

Imaging systems have traditionally relied upon translation of lenses for focus control. However, this proves slow and difficult to miniaturize for small form factor imaging systems such as endoscopes and cell phone cameras [1]. Microelectromechanical systems (MEMS) mirrors present an electronic means for fast focus control, while maintaining a small form factor. In this presentation I will describe novel, low-actuator-count MEMS mirrors for fast focus control with concurrent management of attendant spherical aberration in a transmission microscope [2], a disk read head [3], and a confocal microscope [4]. In a recent work, I synchronized the MEMS mirror with the fast scan axis to obtain oblique plane images of Drosophila larvae, which is much faster than acquiring an x-y-z stack for later processing [4]. Additionally, I showed improvement of point-spread-functions over 120 mm focal range at 0.55 NA with active focusing and correction of spherical aberration. SPIE selected the work as having great potential to impact health care.

The capability of these mirrors for agile focus control over a 3D surface could significantly improve imaging of time-sensitive biological phenomena, optical biopsy during medical procedures, 3D printing or scanning of small objects, and image feature tracking or stabilization. Their performance has also been improved by a capacitive sensing/control scheme that has increased the usable stroke range (inversely proportional to focal length) of these devices by more than 50% [6]. Applied Optics has accepted a paper that explores the range of lower-order and higher-order spherical aberration correction these MEMS mirrors can demonstrate while focusing, as well as a training scheme and demonstration for use in scanning systems [7]. I will also discuss an aberration analysis, based on the characteristic function of the optical system, that outlines the inherent capabilities and limitations of deformable mirrors. This analysis should be beneficial for new designs of optical instruments that incorporate active focusing mirrors, leveraging their precision, speed, and small size to build more functional and useful instruments for biomedicine and industrial imaging applications. The presentation will include recent efforts under an NSF Phase I SBIR ($150k) grant to commercialize 3D imaging technology with novel MEMS mirrors.

[1] Dickensheets et al., Intl. Conf. Micro/Nano Optical Engineering (2011).
[2] Lukes et al., JMEMS, 22, 94-106 (2013).
[3] Lukes et al., SPIE MOEMS, 82520L (2012).
[4] Lukes et al., SPIE BiOS, 89490W (2014). (Selected as translational)
[6] Lukes et al., JM3, 8, 043040 (2009).
[7] Lukes et al., “Four-zone varifocus mirrors with adaptive control of primary and higher-order spherical aberration,” accepted by Applied Optics.

Bio:

Sarah J. Lukes enjoys working in an interdisciplinary field where electrical, optical, and mechanical systems benefit biomedical engineering applications. As an undergraduate intern, she developed a computer program and interface for technicians to help diagnose osteoarthritis from radiographs at the Mayo Clinic in Rochester, MN. The accuracy proved better than a radiologist’s measurements of minimum joint space widths, and the results were published. She also worked on 3D test and imaging methods for stents designed for the superficial femoral artery as an R&D intern at Boston Scientific in Minneapolis, MN. After attaining an undergraduate degree in mechanical engineering, she designed vibration reduction systems at cryogenic temperatures for LIDAR applications at S2 Corporation, a start-up company.

She then earned her Ph.D. in engineering with an emphasis in electrical engineering in May of 2015 at Montana State University with support of an NSF Graduate Research Fellowship. During her studies, Dr. Lukes gained expertise in optical MEMS while publishing 14 papers, 10 of which she is first author. She was honored by the university with the Betty Coffey Award for her promotion of women in the College of Engineering, and she was among 12 U.S. students selected by National Nanofabrication Infrastructure to attend its International Winter School in Bangalore, India to gain further nanofabrication expertise and learn about novel technologies that are impacting rural areas.

After graduating with her PhD, Dr. Lukes applied as principal investigator for, and was awarded, an NSF Phase I SBIR ($150k for 6 months) and Montana Stage 1 funding ($30k) for her newly founded company, Agile Focus Designs. Her team is working toward commercialization of novel 3D imaging technology. SPIE, the international society for optics and photonics, also invited her to write an upcoming SPIE Spotlight Author eBook entitled “Dynamic and Agile Focusing in Microscopy: A Review.” Below is a list of selected publications that are not included in the attached research statement.

Email

Website

Position: Ph.D. Student

Current Institution: Lehigh University

Abstract:
Achieving FEC and RLL for Visible Light Communications: A Concatenated Convolutional-Miller Coding Mechanism

Advances in solid-state lighting not only enable light-emitting diodes (LEDs) to be a promising source for future lighting, but also make possible the use of visible light communications (VLC). VLC is desirable due to its unregulated bandwidth and its lack of interference with existing radio frequency (RF) systems. Consider a practical VLC system using the visible light spectrum from 380 to 780 nm to provide the dual purpose of illumination and communication. Among the key building blocks are the run-length limited (RLL) code and the forward error correction (FEC) code. RLL codes are widely used in VLC to avoid long runs of 1’s and 0’s that could cause flicker. This work explores the serial concatenation of convolutional codes and Miller codes to simultaneously achieve FEC and RLL control. Miller codes, despite their high bandwidth efficiency, are largely ignored in practice due to their disappointing power efficiency. The novelty of this work is that we identify the merits of this previously unfavorable RLL code (i.e., its trellis structure and soft decodability), exploit important coding principles (i.e., interleaved serial concatenation and soft iterative decoding), and assemble them into a powerful turbo structure that outperforms the existing favorable choices. A modified BCJR decoding algorithm is developed for the proposed concatenation.
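To show why Miller (delay-modulation) coding limits run lengths, here is a minimal encoder sketch: a ‘1’ toggles the level mid-bit, and a ‘0’ keeps the level except that a ‘0’ following a ‘0’ toggles at the bit boundary. This is a textbook illustration of the RLL property only; the concatenated convolutional-Miller scheme and its BCJR decoder are not reproduced here.

def miller_encode(bits):
    # Output: two half-bit chips (+1/-1) per input bit.
    level = 1
    chips = []
    prev_bit = None
    for b in bits:
        if b == 1:
            chips.append(level)
            level = -level           # mid-bit transition for a '1'
            chips.append(level)
        else:
            if prev_bit == 0:
                level = -level       # transition at the bit boundary for '00'
            chips.append(level)
            chips.append(level)      # no mid-bit transition for a '0'
        prev_bit = b
    return chips

print(miller_encode([1, 0, 0, 1, 1, 0]))   # runs stay between 1 and 2 bit periods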

Bio:
Xuanxuan Lu received the B.E. degree from the School of Telecommunication Engineering, Beijing University of Posts and Telecommunications, Beijing, China, in 2008 and the M.E. degree in information and communication engineering from Zhejiang University (one of the top five universities in China), Hangzhou, China, in 2011. She is currently pursuing the Ph.D. degree at the Department of Electrical and Computer Engineering, Lehigh University, Bethlehem, PA, USA. Her research interests include signal processing for multiuser systems, cooperative communications, and visible light communications. She has served as a reviewer for leading journals in the field of communication engineering, such as IEEE Transactions on Wireless Communications and IEEE Transactions on Communications, and as a TPC member and reviewer for international conferences in her area. She was the recipient of the Lehigh University Dean’s Doctoral Student Assistantship from 2011-2012 and the Rossin Doctoral Fellowship from 2013-2014. She has a solid technical background and a strong desire to become an accomplished scholar. She is sincere about research, quick to grasp new ideas and concepts, well-organized, and hard-working. Her prolific work has already resulted in more than 20 papers published in top international journals and conferences, and her work is widely cited by researchers in the same area.

Position: Ph.D. Candidate

Current Institution: University of Southern California

Abstract:
Percentile Policies for Tracking of Markovian Random Processes with Asymmetric Cost and Observation

Motivated by wide-ranging applications such as video delivery over networks using multiple description coding (MDC), congestion control, rate adaptation, spectrum sharing, provisioning of renewable energy, inventory management, and retail, we study the state-tracking of a Markovian random process with a known transition matrix and a finite ordered state set. The decision-maker must select a state as an action at each time step in order to minimize the total expected (discounted) cost. The decision-maker is faced with asymmetries in both cost and observation:

if the selected state is less than the actual state of the Markovian process, an under-utilization cost is incurred and only partial information about the actual state (i.e., an implicit lower bound on it) is revealed; otherwise, the decision incurs an over-utilization cost and reveals full information about the actual state. We can formulate this problem as a Partially Observable Markov Decision Process (POMDP), which can be expressed as a dynamic program (DP) based on the last fully observed state and the time of full observation. This formulation determines the sequence of actions to be taken between any two consecutive full observations of the actual state in order to minimize the total expected (discounted) cost. However, this DP grows exponentially, with little hope for a computationally feasible solution. We present an interesting class of computationally tractable policies with a percentile threshold structure. Among all percentile policies, we search for the one with the minimum expected cost. The result of this search is a heuristic policy, which we evaluate through numerical simulations. We show that it outperforms myopic policies and, under some conditions, performs close to the optimal policy. Furthermore, we derive a lower bound on the cost of the optimal policy that can be computed with low complexity and gives a measure of how close our heuristic policy is to the optimal policy.
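To convey the flavor of a percentile-threshold rule, here is a minimal sketch that, given a belief vector over the ordered state set, selects the smallest state whose cumulative belief reaches a percentile rho; the function percentile_action and the fixed rho are illustrative assumptions, and the paper's policy class and asymmetric-observation belief updates are richer than this.

import numpy as np

def percentile_action(belief, rho):
    # belief: probability vector over the ordered states; rho in [0, 1]
    cdf = np.cumsum(belief)
    return int(np.searchsorted(cdf, rho))

# Example: 5 ordered states; a conservative rho < 0.5 biases toward lower states,
# trading an under-utilization cost against a costly over-utilization.
belief = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
print(percentile_action(belief, 0.3))   # -> state index 1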

Bio:
Parisa Mansourifard is a PhD candidate in the Electrical Engineering department at the University of Southern California. She received her Bachelor of Science and Master of Science in Electrical Engineering from Sharif University of Technology, Iran, in 2008 and 2010, respectively. She joined the University of Southern California in 2011 with a Viterbi fellowship, where she is currently pursuing a Ph.D. degree. She also received a second Master of Science, in Computer Science, from the University of Southern California in 2015. She held an American Association of University Women (AAUW) dissertation fellowship for 2015-2016. Her research interests include decision-making, stochastic control and optimization, and the intersection of optimization and learning theory. In her research, she aims to solve critical optimization problems in various networks, such as inventory management or communication networks, where a mismatch between demands and resources causes undesired costs.

Position: Ph.D. Graduate Student Researcher

Current Institution: University of California, San Diego

Abstract:
Vision for Intelligent Vehicles & Applications (VIVA): Face Challenge

Intelligent vehicles of the future are those that, with a holistic (i.e., inside and outside the vehicle) perception and understanding of the driving environment, make it possible for passengers to go from point A to point B safely and in a timely manner. This may happen by way of providing active assistance to drivers, giving full control to automated cars, or some combination of the two. Regardless of how, a holistic perception and understanding of the inside and outside of the vehicle is absolutely necessary, and vision-based techniques are expected to play an increasing role in this holistic view. The question is: how well do proposed vision techniques work, so that they can be used in time- and safety-critical driving situations?

Vision for intelligent vehicles & applications (VIVA) is a challenge set up to serve two major purposes. The first is to provide the research community with a common pool of naturalistic driving data, with videos looking inside and looking outside the vehicle, that presents the issues and challenges of real-world driving scenarios. The second is to challenge the research community to highlight problems and deficiencies in current approaches and, simultaneously, to advance the development of future algorithms. There are benchmarking competitions and databases available for general vision problems, such as the KITTI Vision Benchmark Suite, which is the closest to VIVA because its data is collected while driving. However, one major difference is that VIVA contains datasets and challenges from looking inside the vehicle, while KITTI does not.

With a special focus on challenges from looking inside at the driver’s face (i.e. VIVA-Faces Challenge), the presentation will provide information on how the data is acquired, annotated and released; how methods are compared; and where, when and how to participate in the challenge.

Bio:
Sujitha Martin received the B.S. degree in electrical engineering from the California Institute of Technology, Pasadena, CA, USA, in 2010 and the M.S. degree in electrical and computer engineering from the University of California San Diego (UCSD), La Jolla, CA, USA, in 2012. She is currently working toward the Ph.D. degree in the Laboratory for Intelligent and Safe Automobiles, Computer Vision and Robotics Research Laboratory, UCSD, under the guidance of Professor Mohan M. Trivedi. Her research interests include computer vision, machine learning, human–computer interactivity, big data, facial analysis and gesture analysis.

Ms. Martin has published 20 papers in, and peer-reviewed at least 10 papers for, distinguished conferences (e.g., the IEEE Intelligent Vehicles Symposium (IV)) and journals (e.g., IEEE Transactions on Intelligent Transportation Systems (T-ITS)). Along with her colleagues, her paper titled “Head, Eye, and Hand Patterns for Driver Activity Recognition” was one of the four finalists for the Best Industry Related Paper Award (BIRPA) at the International Conference on Pattern Recognition (ICPR), held in Stockholm, Sweden, in August 2014. Another one of her works, titled “Vision Challenges in Naturalistic Driving Videos,” was honored with a National Science Foundation travel award for presenting the work at The Future of Datasets in Vision Workshop, held in conjunction with the IEEE Computer Vision and Pattern Recognition (CVPR) Conference in Boston, MA, in June 2015.

She is actively involved in and contributes to the research community by serving on technical program committees for workshops (i.e., at IV and CVPR), and by organizing and co-chairing workshops (i.e., two years in a row of the Vision for Intelligent Vehicles & Applications (VIVA) workshop at IV). Ms. Martin also enjoys teaching, as shown by serving as a teaching assistant for five classes (e.g., Introduction to Linear and Nonlinear Optimization, Elements of Machine Learning I, and Special Topics in Signal and Image Processing/Robotics and Control Systems) and, more recently, by mentoring one undergraduate and one graduate student to submit novel contributions as papers to the IEEE International Conference on Intelligent Transportation Systems (ITSC).

Her interest in academic research started during her undergraduate years at Caltech, where Ms. Martin participated in summer research programs (e.g., the Summer Undergraduate Research Fellowship (SURF) and NASA Space Grant) and internships (e.g., JPL, Synaptics) every summer. Her diverse research experience in topics ranging from micro-fluidic chips to outer space to wireless communication to machine vision and robotics reflects her interest in balancing the theoretical and practical aspects of research. Her Ph.D. work in machine vision for intelligent vehicles has exposed her to a good mixture of where theory meets application. Her plans for the future include continuing in this line of research while looking for opportunities to think outside the box.

Email

Website

Position: Researcher

Current Institution: Stanford University

 

Abstract:
Traditional solutions for test and reliability do not scale well to modern designs, whose size and complexity increase with every technology generation. Therefore, in order to meet time-to-market requirements as well as acceptable product quality, it is imperative that new methodologies be developed for quickly evaluating a system in the presence of faults.

In my research, statistical methods have been employed and implemented to (1) estimate the stuck-at fault coverage of a test sequence and evaluate the given test vector set without the need for complete fault simulation, and (2) analyze design vulnerabilities in the presence of radiation-based (soft) errors. Experimental results show that these statistical techniques can evaluate a system under test orders of magnitude faster than state-of-the-art methods with a small margin of error.
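As a simple point of reference for statistical evaluation, below is a minimal Monte Carlo sketch that estimates fault coverage from a random sample of faults instead of simulating all of them; the function estimate_fault_coverage and the per-fault predicate detects(fault) are hypothetical stand-ins, and the dissertation's estimators (which also exploit fault-free and partial fault simulation) are more sophisticated than this.

import random

def estimate_fault_coverage(fault_list, detects, sample_size, seed=0):
    # detects(fault) is a placeholder for a per-fault simulation that returns
    # True if the test sequence detects the fault.
    rng = random.Random(seed)
    sample = rng.sample(fault_list, sample_size)
    detected = sum(1 for f in sample if detects(f))
    p_hat = detected / sample_size
    # normal-approximation 95% confidence interval on the coverage estimate
    half_width = 1.96 * (p_hat * (1 - p_hat) / sample_size) ** 0.5
    return p_hat, (p_hat - half_width, p_hat + half_width)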

In my dissertation, I have introduced novel methodologies that utilize the information from fault-free simulation and partial fault simulation to predict the fault coverage of a long sequence of test vectors for a design under test. These methodologies are practical for functional testing of complex designs under a long sequence of test vectors. Industry is currently seeking efficient solutions for this challenging problem.

I have also developed a statistical methodology for a detailed vulnerability analysis of systems under soft errors. This methodology works orders of magnitude faster than traditional fault injection. In addition, it is shown that the vulnerability factors calculated by this method are closer to those obtained by complete fault injection (the ideal approach to soft-error vulnerability analysis) than those obtained by statistical fault injection. Performing such a fast soft-error vulnerability analysis is crucial for companies that design and build safety-critical systems.

Bio:
I received my B.Sc. in Computer Engineering from Sharif University of Technology, Tehran, Iran, and my M.Sc. from the University of Tehran, Tehran, Iran, in 1998 and 2002, respectively. I joined the CAD research group under the supervision of Prof. Zain Navabi in 2001, before completing my Master’s degree, and started developing test tools there. I joined Prof. Jacob Abraham’s research group at the University of Texas at Austin in 2009 and graduated in 2014. My research was mainly on scalable techniques for functional fault grading and reliability evaluation of large systems. In February 2015, I joined Prof. Subhasish Mitra’s group at Stanford University as a postdoctoral fellow and continued my research on reliable systems (soft errors and transistor aging). Currently, I am a researcher at Stanford University, continuing the research I started as a postdoc, and also working as a principal product engineer at BigStream Solutions, an early-stage startup focused on accelerating big data applications. My main task at BigStream is to develop the back-end infrastructure (the bridge between the accelerator hardware and the software/application).

Position: Postdoctoral Researcher

Current Institution: University of California, Berkeley

Abstract:
Computational Models of Natural Language Learning and Processing

People effortlessly learn languages through observation and without any explicit supervision; however, language learning is a complex computational process that we do not fully understand. In this talk, I will explain how computational modeling can shed light on the mechanisms underlying semantic acquisition (learning word meanings and their relations), which is a significant aspect of language learning. I introduce an unsupervised framework for semantic acquisition that mimics children: it starts with no linguistic knowledge and processes the input using general cognitive (learning) mechanisms such as memory and attention. I show that by integrating other cognitive mechanisms with word learning, our computational model can better account for child behavior. Specifically, I demonstrate that three important phenomena observed in child vocabulary development (individual differences, the role of forgetting in learning, and learning semantic relations among words) can only be explained when these cognitive mechanisms are integrated with word learning.

Bio:
Aida Nematzadeh is a post-doctoral researcher at the University of California, Berkeley. She received a PhD and an MSc in Computer Science from the University of Toronto in 2015 and 2010, respectively. Aida’s research provides a better understanding of the computational mechanisms underlying the human ability to learn and organize information, with a focus on language learning. Aida has been awarded a NSERC Postdoctoral Fellowship from Natural Sciences and Engineering Research Council of Canada.

Email

Website

Position: Ph.D. Candidate

Current Institution: MIT

Abstract:
Probing and Tuning the Nanoscale Enabling Active Nanodevices

At the nanoscale, unique properties and phenomena emerge leading to scientific and technological paradigms beyond those classically envisioned. My research implements an interdisciplinary approach to precisely probe and tune the nanoscale, study the emerging physical principles and utilize those to develop devices with new and improved functionalities. A particular focus of my work is controlled and reversible tuning of nanostructured configurations to achieve dynamic modulation of electrical and optical properties. Such tunability provides mechanisms that make feasible active nanodevices with broad applications.

Studying the nanoscale necessitates the precise development of features a few nanometers in dimension. Conventional fabrication techniques often lack the desired resolution and introduce imperfections that interfere with device function. My research addresses these challenges by developing alternative fabrication techniques in which control of surfaces and interfaces directs the assembly of nanostructured components into larger functional units with nanometer precision. To achieve tunability in these structures, one approach I have implemented utilizes mechanically conformal components, such as organic molecules, in the design. In this scheme, conformational changes of the active component under an applied external stimulus result in controlled and reversible tunability of the device architecture and subsequently its performance.

Utilizing these principles, I have demonstrated a tunneling electromechanical switch composed of a sub-5 nm metal-molecule-metal switching gap. In this design, electrostatically induced mechanical compression of the molecular layer modulates the distance between the electrodes, leading to an exponential increase in the tunneling current that defines an abrupt switching. The molecular layer helps form switching gaps only a few nanometers thick, dimensions much smaller than conventionally feasible, thus enabling operating voltages much lower than those of conventional nanoelectromechanical switches. During the switching operation, the elastic force in the compressed molecular layer can overcome the surface adhesive forces between the approaching electrodes. This force control prevents the permanent adhesion of these components, referred to as stiction, overcoming a common failure mode of electromechanical systems. With low operating voltages and repeatable performance, these switches have promising applications in energy-efficient electronics. Conformational changes can also alter the interaction of light with nanostructures. My research has exploited this property to develop dynamically tunable plasmonic structures in which nanometer changes in device conformation result in modulation of the plasmon resonance. These active plasmonic structures provide a platform for various applications including nanoscale metrology techniques and on-chip optical sources.

Bio:
Farnaz Niroui is a Ph.D. candidate in the Department of Electrical Engineering and Computer Science at Massachusetts Institute of Technology where she works with Professors Vladimir Bulovic and Jeffrey Lang. Her research interest is at the interface of device physics, nanofabrication, and materials science to study, manipulate and engineer devices and systems with unique functionalities at the nanoscale. Farnaz is a recipient of the Natural Sciences and Engineering Research Council of Canada Scholarship for graduate studies. She received her Master of Science degree in Electrical Engineering from MIT in 2013 and completed her undergraduate studies in Nanotechnology Engineering at University of Waterloo in 2011.

Position: Postdoctoral Associate

Current Institution: MIT

Abstract:
Type-directed Program Synthesis

Increasing programmer productivity and improving software quality calls for new ways of describing computation that are more declarative than what mainstream programming languages offer today. The biggest challenge in realizing this goal is bridging the gap between the high-level declarative specifications that developers are expected to provide and the efficient implementations that they have come to expect from mainstream languages. In recent years, Program Synthesis has emerged as a promising technology for bridging this gap by relying on powerful search algorithms to find programs that match a specification. Making synthesis scale to general programming tasks, however, requires significant advances in the ability to decompose specifications: the better a specification for a program can be broken up into independent specifications for its components, the fewer combinations of components the synthesizer has to consider.

This talk will present Synquid, a synthesis-enabled programming language that automatically generates programs from declarative specifications. These programs may be recursive and manipulate nontrivial data structures; for example, Synquid is the first synthesizer to automatically discover provably correct implementations of textbook sorting algorithms, as well as balancing and insertion operations on Red-Black Trees and AVL Trees. At the core of Synquid is a new approach to type-directed synthesis that is able to decompose complex synthesis problems into independent sub-problems and hence efficiently navigate the space of candidate implementations. In particular, the talk will describe local liquid type checking: a new algorithm for refinement type checking, which leverages parametric polymorphism to decompose specifications. Type-directed synthesis techniques also have applications in program repair: the talk will demonstrate how Synquid’s underlying mechanisms can be employed to automatically rewrite programs that violate information flow policies.

Bio:
Nadia Polikarpova is a postdoctoral researcher at the MIT Computer Science and Artificial Intelligence Lab interested in helping programmers build error-free software. She completed her PhD in 2014 at ETH Zurich (Switzerland). For her dissertation she developed tools and techniques for automated formal verification of object-oriented libraries. During her doctoral studies Nadia was an intern at Microsoft Research Redmond, where she worked on verifying real-world implementations of security protocols. At MIT, Nadia has been applying type-based verification to automated program synthesis and repair. She received her Bachelor’s and Master’s degrees in Applied Mathematics and Informatics from ITMO University (Saint Petersburg, Russia).

Position: Postdoctoral Researcher

Current Institution: National University of Ireland – Galway

Abstract:
Confounders in Dielectric Properties of Biological Tissues and the Impact on Electromagnetic Medical Applications

The dielectric properties of biological tissues are of fundamental importance to understanding and quantifying the interaction of electromagnetic fields with the human body. These quantities determine the transmission, reflection, and absorption of electromagnetic fields within the body. Accurate knowledge of the dielectric properties of human tissues is vital for many applications. In particular, they are used to evaluate the safety of wireless electronic devices and communications, and in the design and development of electromagnetic medical imaging and therapeutic devices. The dielectric properties often play a role in determining the operating requirements of such devices, including the minimum input (transmitted) power and the functional frequency range. Historically, studies reported in the literature have aimed to establish a database of dielectric properties for many human tissues; however, rather than solidifying existing data, such studies have often produced conflicting results, a fact which is likely attributable to the considerable differences in measurement approaches and techniques used at all stages of dielectric property studies. Dielectric measurements are typically performed by placing an open-ended coaxial probe in contact with the tissue sample, and recording the reflection coefficient with a vector network analyser. While uncertainties occur due to the measurement equipment (e.g. drift, random noise, cable movements), the uncertainties attributed to clinical factors are orders of magnitude higher. Clinical factors result from measurements on tissues in an uncontrolled environment; examples of causes for clinical uncertainties include the quality and pressure of the probe-sample contact, the sample temperature, the ambient humidity, and poor quantification of the types of tissues that are present in heterogeneous samples. This work presents an exhaustive investigation of two key clinical factors, the probe sensing depth and the process of attribution of measured dielectric properties to samples with heterogeneous tissue contents. The findings demonstrate that significant error can be introduced to the dielectric properties of tissues when using common assumptions relating to these two key clinical factors. A framework is presented for quantifying these factors, enabling future dielectric property studies to obtain results that are meaningful, repeatable, and traceable.
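For readers unfamiliar with how tissue dielectric properties are typically reported, here is a minimal sketch of the single-pole Cole-Cole model, a standard parametric form for frequency-dependent complex permittivity; the parameter values below are placeholders for illustration only, not measurements from this work.

import numpy as np

def cole_cole(freq_hz, eps_inf, delta_eps, tau_s, alpha, sigma_s, eps0=8.854e-12):
    # eps(w) = eps_inf + delta_eps / (1 + (j*w*tau)^(1-alpha)) + sigma / (j*w*eps0)
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    jw = 1j * w
    return eps_inf + delta_eps / (1 + (jw * tau_s) ** (1 - alpha)) + sigma_s / (jw * eps0)

# Placeholder parameters evaluated at 1 GHz
eps = cole_cole(freq_hz=1e9, eps_inf=4.0, delta_eps=50.0,
                tau_s=8e-12, alpha=0.1, sigma_s=0.7)
print(eps.real, -eps.imag)   # relative permittivity and loss term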

Bio:

Emily Porter is a Postdoctoral Researcher and Adjunct Lecturer in the Lambe Medical Device Group at the Translational Research Facility (University Hospital Galway), National University of Ireland-Galway (NUIG). Her research is focused on novel medical applications of electromagnetics. In particular, her interests include bladder and kidney monitoring using electrical impedance tomography, microwave radar for breast cancer diagnosis and treatment, anatomically and electrically realistic phantoms, and standardized dielectric measurements of biological tissues. Such electromagnetic medical devices have significant potential to enhance health diagnosis strategies and treatment outcomes through non-invasive techniques with minimal side-effects. Dr. Porter is an active member of a European Cooperation in Science and Technology (COST) Action, “TD1301: Accelerating the Technological, Clinical and Commercialisation Progress in the Area of Medical Microwave Imaging,” which consists of over 160 members in 26 countries.

Emily Porter studied at McGill University, Montreal, Canada, where she received her M.Eng. in 2010 and her Ph.D. in Applied Electromagnetics in 2015. During her time at McGill University, she also worked as an editor and proofreader for technical publications. Her Ph.D. research focused on the design and implementation of a microwave breast health monitoring device, which is currently undergoing early clinical studies at the McGill University Health Centre’s Breast Clinic at the Royal Victoria Hospital (Montreal). Initial results of the study have been published in IEEE Transactions on Biomedical Engineering and IEEE Transactions on Medical Imaging.

Dr. Porter is the recipient of several prestigious national and international awards, including the IEEE Antennas and Propagation Society Doctoral Research Award, the Irish Research Council (IRC) “New Foundations” Grant, the Royal Irish Academy (RIA) Charlemont Grant, the Natural Sciences and Engineering Research Council of Canada (NSERC) Postdoctoral Fellowship, Le Fonds de recherche du Québec – Nature et technologies (FRQNT) Fellowship (Research Fund of Quebec: Nature and Technologies), and the D.W. Ambridge Prize, awarded by McGill University for the most outstanding graduating doctoral student in Natural Sciences or Engineering.

Position: Ph.D. Student

Current Institution: University of Alberta

Abstract:

Complex networks represent the relationships or interactions between entities in a complex system, such as biological interactions between proteins and genes, hyperlinks between web pages, and co-authorships between research scholars. Although drawn from a wide range of domains, real-world networks exhibit similar properties and evolution patterns. A fundamental property of these networks is their tendency to organize according to an underlying modular structure, commonly referred to as clustering or community structure. My graduate research focuses on comparing, quantifying, modeling, and utilizing this common structure in networks.

In the first part, I present generalizations of well-established traditional clustering criteria and propose proper adaptations to make them applicable in the context of networks. This includes generalizations and extensions of 1) the well-known clustering validity criteria that quantify the goodness of a single clustering; 2) clustering agreement measures that compare two clusterings of the same network. These adapted measures are useful in both defining and evaluating the communities in networks.
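As a concrete instance of the classical agreement measures referred to above, here is a minimal pair-counting implementation of the adjusted Rand index; it is included only as a baseline illustration, and the network-aware adaptations proposed in this work are not shown.

from itertools import combinations

def adjusted_rand_index(labels_a, labels_b):
    # Pair-counting form: agreement on whether each pair of nodes is co-clustered.
    n = len(labels_a)
    same_a = same_b = same_both = 0
    for i, j in combinations(range(n), 2):
        a = labels_a[i] == labels_a[j]
        b = labels_b[i] == labels_b[j]
        same_a += a
        same_b += b
        same_both += a and b
    total_pairs = n * (n - 1) // 2
    expected = same_a * same_b / total_pairs
    max_index = (same_a + same_b) / 2
    if max_index == expected:
        return 1.0
    return (same_both - expected) / (max_index - expected)

print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))   # identical partitions -> 1.0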

In the second part, I study generative network models and introduce an intuitive and flexible model for synthesizing modular networks that closely comply with the characteristics observed for real-world networks. The high degree of expressiveness and the realistic nature of our network generator makes it particularly useful for generating benchmark datasets with built-in modular structure.

In the last part, I investigate how the modular structure of networks can be utilized in different contexts. On one hand, I show the interplay between the attributes of nodes and their memberships in modules, and present how this interplay can be leveraged for predicting (missing) attribute values; where I propose a novel method for derivation of alternative modular structures that better align with a selected subset of attribute(s). On the other hand, I focus on an e-learning application and illustrate how the network modules can effectively outline the collaboration groups of students, as well as the topics of their discussions; and how this could be used to monitor and assess the participation trends of students throughout the course.

Bio:
Reihaneh Rabbany is a final-year Ph.D. candidate in the Computer Science Department at University of Alberta, Edmonton, Canada, and a member of Alberta Innovates Center for Machine Learning. She received her M.Sc. degree from the same department in 2010, and her B.Sc. degree in Software Engineering from Amirkabir University of Technology, Tehran, Iran in 2007. She was recognized for ranking first in her B.Sc graduating class and received the Computing Science GPA Award for her graduate studies in 2014. She is the recipient of several scholarships including Queen Elizabeth II Graduate Scholarship from University of Alberta in 2013-2015. Her research interests are data mining, machine learning, and applications in education including massive open online courses. Her Ph.D. research is focused on principled ways of comparing, quantifying, modeling, and utilizing the modular structure of real-world complex networks. She has published more than 15 peer-reviewed research papers and has served as a reviewer for several academic conferences and journals.

Position: Ph.D. Student

Current Institution: MIT

Abstract:
Energy-Efficient Circuits for Computational Photography on Mobile Devices

Computational photography encompasses a wide range of image capture and processing techniques, such as high dynamic range (HDR) imaging, low-light enhancement, image deblurring, panorama stitching and light field photography, that allow users to take photographs that cannot be taken by a traditional digital camera. However, most of these techniques have high computational complexity, and existing software-based solutions do not achieve real-time performance and energy efficiency on mobile devices. This work proposes hardware accelerator-based implementations of these algorithms which achieve real-time performance. Additionally, the proposed implementations achieve over two orders of magnitude improvement in energy efficiency, making them suitable for integration into mobile devices.

The first part of this work focuses on deblurring of images degraded by camera-shake blur. Removing this blur requires deconvolving the blurred image with a kernel that represents the trajectory of the camera during the exposure. This kernel is typically unknown and needs to be estimated from the blurred image. The estimation is computationally intensive and takes several minutes on a CPU, which makes it unsuitable for mobile devices. This work presents the first hardware accelerator for kernel estimation for image deblurring applications. It achieves a 78x reduction in kernel estimation runtime, and a 56x reduction in total deblurring time for a FullHD 1920×1080 image, which enables quick feedback to the user. Configurability in kernel size and number of iterations gives up to 10x energy scalability, allowing the system to trade off runtime with image quality. The test chip, fabricated in 40 nm CMOS, consumes 105 mJ for kernel estimation running at 83 MHz and 0.9 V, compared to 467 J consumed by a CPU.
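For context, here is a textbook sketch of the non-blind half of deblurring: once a blur kernel has been estimated (the computationally hard step that the accelerator targets), the sharp image can be recovered with a frequency-domain Wiener deconvolution. This is an illustrative baseline only, not the chip's algorithm.

import numpy as np

def wiener_deconvolve(blurred, kernel, snr=100.0):
    H = np.fft.fft2(kernel, s=blurred.shape)        # kernel spectrum, zero-padded
    B = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft2(W * B))

# Example on synthetic data: blur a random "image" with a small box kernel,
# then deconvolve it (circular convolution is used here for simplicity).
rng = np.random.default_rng(0)
img = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
restored = wiener_deconvolve(blurred, kernel)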

The second part of this work focuses on the design of a reconfigurable processor for bilateral filtering which is commonly used in computational photography applications. Specifically, the 40 nm CMOS test chip performs HDR imaging, low light enhancement and glare reduction while operating from 98 MHz at 0.9 V to 25 MHz at 0.9 V. It processes 13 megapixels per second while consuming just 17.8 mW at 98 MHz and 0.9 V, achieving significant energy reduction compared to previous CPU/GPU implementations.
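As a reference for what the reconfigurable processor accelerates, below is a minimal direct-form bilateral filter: each output pixel is a weighted average of its neighbors, with weights that fall off both with spatial distance (sigma_s) and with intensity difference (sigma_r), so edges are preserved while flat regions are smoothed. The chip necessarily uses a far more efficient hardware-friendly formulation than this sketch.

import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    # img: 2D float image (e.g., grayscale intensities in [0, 1])
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # spatial weights
    padded = np.pad(img, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2*radius + 1, x:x + 2*radius + 1]
            rangew = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))  # range weights
            weights = spatial * rangew
            out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out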

These energy-scalable implementations pave the way for efficient integration of computational photography algorithms into mobile devices.

Bio:

Priyanka Raina received a B.Tech. degree in Electrical Engineering from the Indian Institute of Technology (IIT) Delhi in 2011 and an S.M. degree in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology (MIT) in 2013. She is currently a Ph.D. candidate in the Energy-Efficient Circuits and Systems group at MIT, working under the supervision of Prof. Anantha Chandrakasan. She was a graduate research intern at Intel Labs in the summer of 2013, working on the design of a hardware accelerator for real time video enhancement using multi-frame super-resolution. Ms. Raina has received several awards and honors including the Institute Silver Medal for the highest GPA in Electrical Engineering at IIT Delhi and a Gold Medal at the Indian National Chemistry Olympiad. Her research interests include design of energy-efficient circuits for computational photography, computer vision and machine learning applications.

Email

Website

Position: Graduate Research Staff

Current Institution: Purdue University

Abstract:
Thermophotovoltaic (TPV) devices convert heat into electricity via the photovoltaic (PV) effect. However, efficient energy conversion requires spectral shaping of the thermal radiation to match the emitter’s photonic bandgap to the PV electronic bandgap. Moreover, the proportion of emitted heat intercepted by the PV diode, or the viewfactor, should be maximized. In this work, thermal emitters with spectral and directional selectivity are studied to boost the TPV heat-to-electricity conversion efficiency. First, near-ideal selective thermal emitters with built-in filters are proposed. Using Kirchhoff’s law of thermal radiation, the natural resonant emissivity of rare-earth-based emitters is enhanced by matching the radiative and absorptive quality factors, i.e., by applying the Q-matching concept. A chirped multi-layer dielectric reflector is also integrated to reject parasitic sub-bandgap emission. With this design, the spectral efficiency, defined as the proportion of the emitted spectrum absorbed by the PV diode, can reach 80%, and the theoretical heat-to-electricity conversion efficiency can reach 38%. Second, viewfactor enhancement between emitters and receivers is proposed by introducing spatially dependent, angle-restricted thermal emitters. This arrangement ensures a high viewfactor without restricting the area ratio or the distance between the emitter and the receiver. In particular, sawtooth metallic gratings are shown to support asymmetric delocalized surface modes with asymmetric directional emission about the normal direction. Simulation results obtained by rigorous coupled-wave analysis and the finite-difference time-domain method are used to verify and optimize the expected asymmetric thermal emission. This asymmetric behavior is observed for grating periods larger than the wavelength, with emissivity peaks implied by the Q-matching condition.
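To make the Q-matching design rule concrete, here is a minimal coupled-mode-theory sketch of the on-resonance emissivity of a single resonance coupled to radiation (Q_rad) and material absorption (Q_abs); it illustrates why matched quality factors give near-unity emissivity, and is not a substitute for the full emitter simulations described above.

def peak_emissivity(q_rad, q_abs):
    # Single-resonance coupled-mode-theory estimate:
    #   e = 4 * (1/Q_rad) * (1/Q_abs) / (1/Q_rad + 1/Q_abs)**2
    # which reaches 1 exactly when Q_rad = Q_abs (the Q-matching condition).
    g_rad, g_abs = 1.0 / q_rad, 1.0 / q_abs
    return 4 * g_rad * g_abs / (g_rad + g_abs) ** 2

print(peak_emissivity(100, 100))   # matched   -> 1.0
print(peak_emissivity(100, 1000))  # mismatched -> ~0.33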

Bio:
Enas Sakr is a PhD candidate in the School of Electrical and Computer Engineering at Purdue University, with a concentration in fields and optics. She received her B.Sc. and M.Sc. degrees in electronics and electrical communications, both from Cairo University, Egypt, where she worked as a teaching and research assistant. Her research interests include modeling and simulation of nanophotonic structures, plasmonic structures and metasurfaces, with the theme of tailoring thermal emission for energy-harvesting applications. During her M.Sc., Enas studied the effect of spatial modulation of periodic structures in the microwave regime and their potential applications for waveguide filtering. At Purdue University, her work focuses on efficient harvesting of waste heat as electricity using photonic and photovoltaic concepts, or thermophotovoltaics. She proposed an integrated-filter rare-earth thermal emitter with a near-ideal emission spectrum for improved thermophotovoltaic efficiencies. She also developed asymmetric directional thermal emitters for directing and focusing heat from emitters to receivers. Enas has published in several journals, including Nanophotonics, Applied Physics Letters, Journal of Applied Physics A, and MRS Advances. She has also presented at international conferences, including the Electromagnetics Theory Symposium (EMTS) and SPIE Photonics West. Enas worked as a teaching assistant for the online nanoHUB-U class on nanophotonic modeling on nanoHUB.org. She also served as a graduate mentor for undergraduate researchers funded by the Network for Computational Nanotechnology (NCN) and the Summer Undergraduate Research Fellowship (SURF) program at Purdue University. Enas is the recipient of the Bilsland Dissertation Fellowship award in 2016 at Purdue University, and a recipient of the SPIE Optics and Photonics Education Scholarship in 2015. She was also awarded the Best Graduate Mentor Award in 2014 at Purdue University, and the Young Scientist Award (YSA) at the Electromagnetics Theory Symposium (EMTS) in 2010 in Berlin. Enas is an active member of the SPIE student chapter and the Nanotechnology Student Advisory Council (NSAC) at Purdue University.

Position: Ph.D. Student

Current Institution: UC Berkeley

Abstract:
Refuting random constraint satisfaction problems below the spectral threshold

The study of average-case instances of constraint satisfaction problems, such as 3SAT, sits at the intersection of many fields, from computer science to physics and combinatorics. The complexity-theoretic characterization of random 3SAT instances has consequences for many areas in computer science, most notably for cryptography.

Uniformly random instances of 3SAT are known to be unsatisfiable with high probability when there are at least 5N clauses. However, given a random 3SAT instance on N variables, the task of refuting the instance, or of proving that the instance has no satisfying assignments, is hypothesized to be computationally hard if there are O(N) clauses. In fact, the best known algorithms for refutation require instances with at least Ω(N^{3/2}) clauses, a factor of N^{1/2} beyond the unsatisfiability threshold at O(N).

In this talk, I will describe a new spectral algorithm that refutes 3SAT instances with fewer clauses, given more time. Specifically, our algorithm refutes instances with Θ(N^{3/2 − δ/2}) clauses in exp(O(N^δ)) time, giving the same guarantees as the best known polynomial-time algorithms when δ = 0, and from there smoothly approaching the unsatisfiability threshold at δ = 1. Further, our algorithm strongly refutes the instances, certifying that no assignment satisfies more than a (1 − ε)-fraction of constraints for a constant ε.
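
For concreteness, the two endpoints of this trade-off read as follows (this is only a restatement of the guarantee above, not an additional result):

    \delta = 0:\ \Theta(N^{3/2}) \text{ clauses, polynomial time}; \qquad \delta = 1:\ \Theta(N) \text{ clauses, } \exp(O(N)) \text{ time}.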

Our algorithms also imply tight upper bounds on the value of sum-of-squares relaxations for random CSPs, as well as for random polynomial maximization over the unit sphere (injective tensor norm).

Based on joint work with Prasad Raghavendra and Satish Rao.

Bio:
Tselil Schramm is a 4th year PhD student in the UC Berkeley CS Theory group, with interests in spectral graph theory, spectral algorithms, approximation algorithms and semidefinite programming. She is advised by Prasad Raghavendra and Satish Rao, and supported by an NSF Graduate Research Fellowship and a UC Berkeley Chancellor’s Fellowship. She received her BS in Math/Computer Science at Harvey Mudd College. Her current research focus is understanding the performance of semidefinite programming hierarchies on average-case problems. Outside of research, she enjoys bouldering, hiking, backpacking, binge-watching shows on Netflix, and Yerba Mate tea.

Position: Ph.D. Candidate

Current Institution: Columbia University

Abstract:
Signal Generation for Emerging RF-to-Optical Applications

The need for clean and powerful signal generation is ubiquitous, with applications spanning the spectrum from RF to mm-Wave, and into and beyond the terahertz gap. RF applications, including mobile telephony and microprocessors, have effectively harnessed mixed-signal integration in CMOS to realize robust on-chip signal sources calibrated against adverse ambient conditions. With low cost and high yield, the CMOS component of hand-held devices costs a few cents per part at volumes of millions of parts. This low cost, together with integrated digital processing, makes CMOS an attractive option for applications like high-resolution imaging and ranging, and the emerging 5G communication space. 5G, with its push towards 100x end-user data rates, is expected to need very clean sources that enable transmission of dense modulation on powerful mm-Wave carriers. RF-based ranging (RADAR) techniques, when extended to mm-Wave and even optical frequencies, can enable centimeters to micrometers of resolution, which prove useful in navigation systems and 3D imaging, respectively. These applications, however, impose 10x to 100x more exacting specifications on power and spectral purity at much higher frequencies than conventional RF synthesizers.

We investigate the challenges with generating high-frequency, high-power and low phase-noise signals in CMOS, and discuss three novel prototypes to overcome the limiting factors in each case. We augment the traditional maximum oscillation frequency metric (fmax, typically 200-300 GHz in CMOS), which only accounts for transistor losses, with passive component loss to derive an effective fmax metric. We then present a methodology for building oscillators at this fmax. Next, we explore generating large signals beyond fmax through harmonic extraction. Applying concepts of waveform shaping, we propose a power mixer that engineers transistor nonlinearity to maximize power generation at a specific harmonic. Lastly, we demonstrate an all-passive, ultra-low noise phase-locked loop (PLL). In conventional PLLs, a noisy buffer converts the slow, low-noise sine-wave reference signal to a jittery square-wave clock against which the phase error of a noisy voltage-controlled oscillator (VCO) is corrected. We eliminate this reference buffer, and measure phase error by sampling the reference sine-wave with the 50x faster VCO waveform already available on chip. By avoiding noisy acceleration of slow waveforms, and directly using voltage-mode phase error to control the VCO, we realize a low-noise completely passive controlling loop.

We conclude with ongoing work that brings together these concepts developed for clean, high-power signal generation towards a hybrid CMOS-Optical approach to Frequency-Modulated Continuous-Wave (FMCW) Light-Detection-And-Ranging (LIDAR). FMCW techniques with optical imagers can enable micrometers of resolution. However, cost-effective tunable optical sources are temperature-sensitive and have nonlinear tuning profiles, rendering precise frequency modulations or ‘chirps’ untenable. Locking them to an electronic reference through an electro-optic PLL, and electronically calibrating the control signal for nonlinearity and ambient sensitivity, can make such chirps possible. To avoid high-cost modular implementations, we seek to leverage the twin advantages of CMOS, intensive integration and low-cost high-yield manufacturing, towards developing a single-chip solution that uses on-chip signal processing and phased arrays to generate precise and robust chirps for an electronically-steerable LIDAR beam.
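
For context on why chirp precision matters, the standard FMCW relations (general textbook facts, not results of this work) tie the beat frequency and range resolution to the chirp parameters: for a linear chirp of bandwidth B over duration T and a target at range R,

    f_b = \frac{2R}{c}\cdot\frac{B}{T}, \qquad \Delta R = \frac{c}{2B}.

As an illustrative number, an optical chirp with B = 5 THz yields \Delta R \approx 30 \mu m; any nonlinearity in the chirp smears f_b and erodes this resolution, which is what locking the source in an electro-optic PLL is meant to prevent.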

Bio:

Jahnavi Sharma is a doctoral student working with Dr. Harish Krishnaswamy in the Department of Electrical Engineering at Columbia University. Her research interests include developing integrated circuit solutions for emerging applications, pushing performance through both system- and block-level innovation in CMOS and compound semiconductors. This also encompasses specific interests in high- to sub-mmWave circuit design, device modeling for high-frequency design, and mixed-signal circuit techniques. Her doctoral work focuses on signal generation for RF-to-Optical applications, and she was the recipient of the 2015-16 IBM PhD Fellowship Award.

She has also held internship positions at IBM in 2014, where she worked on new low-loss bipolar switch topologies for widely-tunable mm-Wave oscillators in Silicon-Germanium, and at Alcatel-Lucent in 2013 and 2015, where she worked on designing low-cost hybrid RF-mmWave modules for LTE fronthaul to improve network accessibility and deliver higher data rates without expensive fiber-based backhaul.

Prior to her PhD, Jahnavi graduated with a bachelor’s and master’s degree in Electrical Engineering from the Indian Institute of Technology, Madras in 2009. At IIT Madras, her master’s thesis with Dr. Shanthi Pawan was on fast simulation of Continuous Time Delta-Sigma modulators for analog-to-digital data conversion.

Position: Ph.D. Candidate

Current Institution: USC

Abstract:
Robot as Moderator: Managing Multi-Party Social Dynamics for Socially Assistive Robotics

One approach to multi-party socially assistive robotics involves a robot engaging in moderation, defined in this work as the task of controlling or directing a group interaction. Formally, we define moderation as the process by which a goal-directed multi-party interaction is regulated via assignment of interaction resources, including both physical resources, such as objects or tools, and social resources, such as the conversational floor or participants’ attention. A moderator is an agent whose primary role is to engage in behaviors that moderate the group interaction. In this work, we present a domain-independent computational model of moderation for multi-party human-machine interactions that enables a robot or virtual agent to act as a moderator in a group interaction. Although this work is generally applicable to embodied social agents, the focus of the evaluation is on a robot as the moderator, leveraging results from human-robot interaction (HRI) suggesting that physically embodied agents are more effective than virtual agents at affecting human behavior, as well as domain-specific research indicating that physically embodied interactions have benefits for task performance.

Moderation is formalized as a decision-making problem with two types of goals: task goals, which relate to the main purpose of the interaction (what the group is trying to accomplish), and social goals, which govern how the interaction should proceed (how the task should be accomplished). This approach allows the development of robot control algorithms within the modeling framework, grounded in prior work in robotics and artificial intelligence, most importantly in symbolic planning, where a system uses symbolic reasoning to select a series of actions to achieve a goal, and in probabilistic reasoning and planning, where uncertainty in sensing and actuation is taken into account when choosing agent behaviors. The algorithms are parameterized and tested using real-world human-robot interaction.

Because this work focuses on moderation in goal-directed interactions, the model is evaluated in the domain of socially assistive robotics (SAR). SAR is an area of research in which robots are deployed and studied in contexts where they leverage social interaction to enable users to achieve some challenging goal, typically in health-related or educational contexts. This area of research provides rich opportunities for goal-directed multi-party interactions, with a variety of ways in which interactions might be moderated. The approach is deployed and evaluated in a multi-party assembly game, group storytelling, and an intergenerational family-based interaction including older adults, related adults, and related children (in most cases, groups consist of grandparents, parents, and children). We find that a robot moderator whose control is based on our model is accepted into these groups and can have a positive effect on interactions, including fostering positive socialization, improving group cohesion, and increasing speech.

Bio:

Elaine Short is a sixth-year PhD candidate at the Interaction Lab, run by Prof. Maja Matarić in the Department of Computer Science at the University of Southern California (USC). She received her MS in Computer Science from USC in 2012 and her BS in Computer Science from Yale University in 2010. Elaine is a recipient of a National Science Foundation Graduate Research Fellowship, USC Provost’s Fellowship, and a Google Anita Borg Scholarship. At USC, she has been recognized for excellence in research, service, and teaching: she was awarded the Viterbi School of Engineering Merit Award and the Women in Science and Engineering (WiSE) Merit Award for Current Doctoral Students, as well as the Service Award, Best Teaching Assistant Award, and the Best Research Assistant Award from the Department of Computer Science. At Yale she was the recipient of the Saybrook College Mary Casner Prize. Throughout her career, she has been involved in efforts to recruit and retain women and underrepresented minorities in the field of computer science, including supervising undergraduate and high-school level research assistants and co-founding a social group for women PhD students in computing.

Elaine’s research is in the growing field of Socially Assistive Robotics, which develops robots that help people achieve health-, wellness-, and education-related goals through human-robot interactions. Her research focuses on applying the principles of Socially Assistive Robotics to multi-party interactions by developing algorithms and approaches that enable autonomous robots to moderate multi-party interactions. By treating both social and physical aspects of the interaction as resources that the robot moderator must allocate, her work provides a framework for developing control algorithms in diverse domains that allow a robot moderator to assist groups of people in achieving both task-related goals and goals related to the dynamics of the interaction and roles of the participants. Elaine has worked with diverse end-user populations, from children to older adults, on a variety of research projects. She has developed a semi-autonomous socially assistive robotic system to teach first grade students about nutrition and performed data analysis exploring the role of agency for a socially assistive robot in open-ended interactions with children with autism. She has also co-developed a robot to coach older adults in the chronic phase of post-stroke rehabilitation through a button-pressing rehabilitation task. Elaine is currently working on modeling inter-generational interactions between older adults and both adult and child family members, and developing algorithms for supporting group cohesion while assisting a group in an assembly task. This work has included collaborations with researchers across disciplines and institutions, notably as part of the large, multi-institution Robots for Kids NSF Expeditions in Computing project. She plans to defend her PhD dissertation, “Robot as Moderator: Managing Multi-Party Social Dynamics for Socially Assistive Robotics”, in spring of 2017.

Position: PhD. Student

Current Institution: University of California, Berkeley

Abstract:
CoCoA: A Framework for Communication-Efficient Distributed Optimization

Communication remains the most significant bottleneck in the performance of distributed optimization algorithms for large-scale machine learning. In light of this, we propose a communication-efficient framework, CoCoA, that uses local computation in a primal-dual setting to dramatically reduce the amount of necessary communication. We provide a strong convergence rate analysis for this class of algorithms, as well as experiments on real-world distributed datasets with implementations in Apache Spark. We demonstrate the flexibility and empirical performance of CoCoA as compared to state-of-the-art distributed methods, for common objectives such as SVMs, ridge regression, Lasso, sparse logistic regression, and elastic net-regularized objectives.
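
As a concrete illustration of the framework’s structure, the following is a toy, single-process sketch of a CoCoA-style round for ridge regression: each simulated worker approximately solves a local dual subproblem starting from the shared model, and the resulting updates are averaged in one communication step. The local solver, loss, and aggregation choice here are simplifying assumptions for illustration, not the framework’s reference implementation.

    import numpy as np

    def local_sdca_step(X_k, y_k, alpha_k, w, lam, n, iters=20):
        # Approximately improve the local dual block by randomized coordinate
        # ascent (squared loss), updating a local copy of the primal model w.
        for _ in range(iters):
            i = np.random.randint(len(y_k))
            x_i = X_k[i]
            resid = x_i @ w - y_k[i] + alpha_k[i]
            delta = -resid / (1.0 + (x_i @ x_i) / (lam * n))
            alpha_k[i] += delta
            w = w + (delta / (lam * n)) * x_i
        return alpha_k, w

    def cocoa(X_parts, y_parts, lam=0.1, rounds=50):
        # One round: every worker solves its local subproblem, then the dual and
        # primal updates are averaged in a single communication step.
        K = len(X_parts)
        n = sum(len(y) for y in y_parts)
        w = np.zeros(X_parts[0].shape[1])
        alphas = [np.zeros(len(y)) for y in y_parts]
        for _ in range(rounds):
            w_deltas = []
            for k, (X_k, y_k) in enumerate(zip(X_parts, y_parts)):
                a_new, w_local = local_sdca_step(X_k, y_k, alphas[k].copy(), w.copy(), lam, n)
                alphas[k] += (a_new - alphas[k]) / K   # averaged dual update
                w_deltas.append(w_local - w)
            w = w + sum(w_deltas) / K                   # averaged primal update
        return w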

Bio:

Virginia Smith is a 5th-year Ph.D. student in the EECS Department at UC Berkeley, where she works jointly with Michael I. Jordan and David Culler as a member of the AMPLab. Her research interests are in large-scale machine learning and distributed optimization. She is actively working to increase the presence of women in computer science, most recently by co-founding the Women in Technology Leadership Round Table (WiT). Virginia has won several awards and fellowships while at Berkeley, including the NSF fellowship, Google Anita Borg Memorial Scholarship, NDSEG fellowship, and Tong Leong Lim Pre-Doctoral Prize.

Position: Postdoctoral Researcher

Current Institution: University of California, Berkeley

Abstract:
Material and Device Innovations for Energy Applications

The energy landscape has changed significantly within the last few years, with photovoltaics reaching grid parity and becoming competitive with traditional sources of energy in many parts of the world. However, further improvements in the overall economics remain key to continuing and accelerating this transition towards sustainable energy production. The most commonly used metric for energy cost is $/Watt. Consequently, either efficiency improvement or manufacturing/material cost reduction can lead to an overall cost drop. To achieve this, novel materials, new fabrication process schemes, and innovative device concepts and architectures are needed.

Thin film solar cells represent one route to cost reduction by using direct band gap materials that absorb light efficiently within 1-3 um, reducing the thickness requirement to a hundredth of that of a Si cell. Additionally, such thin film materials can be deposited on flexible substrates, opening a new market for flexible modules. I will present my work on new thin film solar cell devices based on a spectrum of different material systems, each presenting unique opportunities and challenges. Specifically, I worked on three material classes and device architectures. First, chalcopyrite-based semiconductors for thin film solar cells using earth-abundant elements; the focus here is on low-cost solution processing as well as the influence of sodium on electrical performance and grain growth. Second, innovative growth processes that make better use of precursor elements can lead to a cost reduction. I will show a novel thin film vapor-liquid-solid (TF VLS) growth platform to process high-quality III-V semiconductors for application in electronic devices and solar cells. The TF VLS platform enables the growth of any desired shape onto non-epitaxial substrates as well as the simultaneous growth and doping of the material, which makes it highly versatile for novel device applications. Third, another approach to lowering the cost of solar electricity is to increase the solar conversion efficiency. The design of multijunction solar cells presents a promising route to exceed the theoretical Shockley-Queisser limit of ~33% for single-junction photovoltaic devices. The efficiency of the traditionally well-established Si technology can be significantly raised by stacking a wide-gap top cell onto the smaller-gap Si bottom cell, thus making better use of the solar spectrum and enabling conversion efficiencies > 40%. In this context, the hybrid organic-inorganic lead halide perovskites are very attractive due to their ease of processing with low-cost equipment and high conversion efficiencies. My contribution to this field is the investigation of the optoelectronic properties of tunable wide band gap lead halide perovskites, demonstrating high material quality over the full band gap range from 1.6 to 2.3 eV. This makes the novel hybrid material a highly promising candidate for application in lasers, LEDs, transistors, and solar cells.

Bio:
Carolin M. Sutter-Fella received her Ph.D. from ETH Zürich, Switzerland, where she worked in Prof. Ayodhya N. Tiwari’s laboratory for Thin Films and Photovoltaics. Currently, she is a postdoctoral researcher in Prof. Ali Javey’s group in the Electrical Engineering and Computer Science Department, UC Berkeley. Carolin was awarded a Swiss National Science Foundation Fellowship (2015-2017). Her research is centered around synthesis, characterization and functionalization of inorganic and hybrid organic-inorganic semiconductor materials for energy applications. One of her main interests lies in new photovoltaic materials and devices, with the ultimate goal to make solar power the dominant source of energy. To tackle this challenge, she is working on two objectives – reducing the cost and increasing the conversion efficiency of solar cells. Carolin explores new concepts at the interface of materials engineering and device innovation to enable new applications.

Position: Ph.D. Candidate

Current Institution: George Washington University

Abstract:

Convergence rate of stochastic k-means

Clustering, the science (or art) of grouping data automatically, is crucial to many applications, ranging from grouping genes with similar expressions to segmenting forest cover types obtained from satellite images. Modern clustering algorithms face challenges from large-scale datasets. For this purpose, stochastic k-means, a scalable version of the k-means algorithm, was proposed (Bottou-Bengio 98, Sculley 10) and has become increasingly popular for general-purpose large-scale clustering. On the other hand, most of its properties, such as convergence speed and solution quality, lacked formal guarantees, limiting its reliability for practitioners.

I will present our recent results addressing this gap between theory and practice. By exploiting the connection between stochastic k-means and stochastic gradient descent and using recent insights into non-convex stochastic optimization, we show, for the first time, that starting with any initial choice of k centers, the algorithm converges to a local optimum at a linear rate, in expectation. In addition, we show that if the dataset has an underlying “clusterable” structure, then initializing stochastic k-means with a simple and scalable seeding algorithm guarantees expected convergence to an optimal k-means solution at a linear rate, with high probability.
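
To make the algorithm under discussion concrete, here is a minimal sketch of stochastic (online) k-means in the spirit of the Bottou-Bengio / Sculley procedures referenced above, which also makes the stochastic-gradient-descent view explicit; the seeding and step-size choices are illustrative assumptions, not the paper’s exact setup.

    import numpy as np

    def stochastic_kmeans(X, k, n_steps=10000, rng=np.random.default_rng(0)):
        centers = X[rng.choice(len(X), size=k, replace=False)].copy()  # simple seeding
        counts = np.zeros(k)
        for _ in range(n_steps):
            x = X[rng.integers(len(X))]                        # draw one point
            j = np.argmin(((centers - x) ** 2).sum(axis=1))    # nearest center
            counts[j] += 1
            eta = 1.0 / counts[j]                              # per-center step size
            centers[j] += eta * (x - centers[j])               # SGD step on the k-means objective
        return centers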

Bio:
Cheng is a PhD candidate at George Washington University, working on machine learning with Professor Claire Monteleoni. Her recent research directions include rigorously justifying the empirical success of popular clustering heuristics, such as k-means and linkage-based algorithms, developing scalable clustering algorithms via sampling, and analyzing non-convex problems emerging from unsupervised learning. On the applied side, she has used machine learning algorithms to detect patterns of climate extremes from raw climate data. Cheng received her B.S. in Mathematics, also from George Washington University, in May 2012. In the preceding fall, she returned from studying abroad and serendipitously took Claire’s class in machine learning, which turned out to be a perfectly interdisciplinary field that accommodates her diverse interests; she is now entering her 5th year exploring it. She was the recipient of the 2013 Louis P. Wagman Endowment Fellowship and the 2015-2016 Engineer Alumni Association scholarship from the GWU School of Engineering and Applied Science.

Position: Ph.D. Candidate

Current Institution: University of Maryland, College Park

Abstract:
Ideal Ciphers: A Closer Look

Block ciphers are an important building block in many cryptographic constructions. Such constructions are often designed and analyzed in an idealized framework called the ideal-cipher model, where the underlying primitive, the block cipher, is modeled as the ideal object, the ideal cipher. (Informally, an ideal cipher is a keyed oracle in which each key defines an independent, uniformly random permutation.)
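
As an illustration of the definition in parentheses (my own sketch, not part of the talk), an ideal cipher can be simulated by lazy sampling: for each key, consistent forward and inverse tables of a random permutation over n-bit blocks are built on demand.

    import os

    class LazyIdealCipher:
        def __init__(self, block_bits=16):        # tiny block size, for illustration only
            self.n = 2 ** block_bits
            self.fwd = {}                          # key -> {plaintext: ciphertext}
            self.inv = {}                          # key -> {ciphertext: plaintext}

        def _rand_block(self):
            return int.from_bytes(os.urandom(4), "big") % self.n

        def encrypt(self, key, x):
            f, g = self.fwd.setdefault(key, {}), self.inv.setdefault(key, {})
            if x not in f:
                y = self._rand_block()
                while y in g:                      # resample until the output is unused,
                    y = self._rand_block()         # so each key stays a permutation
                f[x], g[y] = y, x
            return f[x]

        def decrypt(self, key, y):
            f, g = self.fwd.setdefault(key, {}), self.inv.setdefault(key, {})
            if y not in g:
                x = self._rand_block()
                while x in f:
                    x = self._rand_block()
                f[x], g[y] = y, x
            return g[y]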

In this work, I focus on the ideal cipher model and address the following questions: (1) how to construct an ideal cipher and (2) how to reason about security of a block cipher-based cryptographic construction that is instantiated with a “defective” ideal cipher. I discuss approaches to solving these two problems and propose research directions aimed at understanding the security provided by current approaches to block cipher designs.

Bio:
Aishwarya Thiruvengadam is a Ph.D. candidate in the Department of Computer Science at the University of Maryland where she is advised by Prof. Jonathan Katz. Her primary interests lie in cryptography and recently, she has been interested in the study and analysis of primitives used in
symmetric-key cryptography.

Position: Ph.D. Candidate

Current Institution: Carnegie Mellon University

Abstract:
Protecting User Privacy for Modern and Emerging Platforms

The evolution of apps on new platforms such as mobile, web, and the Internet of Things is bringing more functionality and convenience to people; however, these new platforms also expose users to security and privacy risks. Researchers and developers are spending much effort to protect users, but unauthorized information leakage is still rampant, especially when new features or new techniques are introduced. This is because it is usually difficult to design new features securely from the beginning. Information leakage is dangerous and urgently needs to be resolved, especially in cases where multiple parties are involved, since coordination among different parties is more complicated. The fundamental problems of information leakage on these new platforms are usually threefold: (1) unclear security implications in protocol design; (2) implementation errors due to misunderstood specifications; (3) ignored or misunderstood usability aspects of security-critical interfaces. To address these problems, I first try to understand the security implications and vulnerabilities of the apps and then design practical and usable information sharing and data protection policies.

In this talk, I’ll present selected projects that discover and measure privacy risks, as well as design and implement privacy schemes for modern and emerging platforms. First, at the protocol level, I performed a security analysis of the HTML5 design and identified issues that break the foundation of the browser security policy. I proposed a defense to fix the vulnerabilities, which leak user private data at large scale. Second, at the implementation level, I performed program analysis to discover problems in current permission systems for third-party apps on social networks and the Internet of Things. With the insights from the program analysis, I propose principles for designing a privacy-preserving permission system that shares least-privilege information with third-party apps without affecting their functionality. Third, at the user-interaction level, I designed a crowdsourcing-based privacy notification scheme for mobile app updates, which nudges users to pay attention to the notifications and make privacy-preserving decisions. In general, I hope to bring low-level privacy enhancements to users through clean design, efficient implementation, and usable interfaces.

Bio:
Yuan is a Ph.D. candidate at Carnegie Mellon University. Her research interests involve security and privacy and their interactions with systems, networking, and human-computer interaction. Her current research focuses on developing new technologies for protecting user privacy, particularly in the areas of mobile systems and the Internet of Things. Her previous work on mobile and web security and privacy has been adopted by Google (Chrome HTML5 privacy), Facebook (flaw analysis for web services, authentication protection), Microsoft (login protection), Samsung (mobile app security), Evernote (OAuth security), Dropbox (OAuth security), and others. She interned at Microsoft Research, Facebook, and Samsung Research. She served as a volunteer for CMU Privacy Day and presented talks to undergraduate student clubs about cyber security. She was recognized as one of Black Hat’s Future Female Leaders. She was a recipient of the IBM Fellowship and a finalist for the Microsoft Research Fellowship and the Qualcomm Innovation Fellowship.

Position: Ph.D. Candidate

Current Institution: University of California, Berkeley

Abstract:
Polymer Nanocomposite Dielectric Materials for Energy Storage Applications

Materials with high dielectric constants have drawn increasing interest in recent years for their important applications in capacitors, actuators, and high-energy-density pulsed power. In particular, polymer-based dielectrics, owing to properties like high electric breakdown field, low dielectric loss, flexibility, and easy processing, are excellent candidates. In order to enhance the dielectric constant of polymer materials, high-dielectric-constant filler materials are added to the polymer. Typically, the dispersion of nanoparticles in polymer matrices is problematic, and the nanoparticles tend to phase separate or aggregate in the polymer matrix.

We propose the use of metal nanoparticle fillers to enhance the dielectric properties of the base polymer while minimizing dielectric loss by preventing nanoparticle agglomeration. Novel combinations of materials, which use 5 nm diameter metal nanoparticles embedded inside high-breakdown-strength polymer materials, are evaluated. High-breakdown-strength polymers are chosen to allow further exploration of these materials for energy storage applications. The focus is on obtaining a uniform dispersion of nanoparticles with no agglomeration by utilizing appropriate ligands/surface functionalizations on the gold nanoparticle surface. The use of ligand-coated metal nanoparticles enhances the dielectric constant while minimizing dielectric loss, even with the particles closely packed in the polymer matrix.

The developed nanocomposite system consists of polyvinylpyrrolidone (PVP)-functionalized gold nanoparticles embedded inside a polyvinylidene fluoride (PVDF) polymer matrix. A homogeneous dispersion of gold nanoparticles with low particle agglomeration has been achieved up to 15 wt% of nanoparticles. Dielectric characterization of the nanocomposite material with 10 wt% nanoparticle content showed a 2x enhancement in the dielectric constant over the base polymer, and low dielectric loss values were observed. A photodefinable nanocomposite dielectric was also developed using the SU-8 polymer.

Bio:
I am a Ph.D. candidate majoring in Nanotechnology at the University of California, Berkeley. I am advised by Prof. Albert Pisano and Prof. Tarek Zohdi. I also collaborate closely with Prof. Thomas Russell of Lawrence Berkeley National Lab. My research interests include dielectrics, nanocomposites, self-assembly of nanomaterials, and energy storage and conversion.

The focus of my work has been on developing advanced functional materials for applications in the field of energy storage. I have worked on the development of polymer nanocomposites based solid-state dielectric materials. I am currently working on the generation of structured fluids that would enable novel applications such as an all-liquid battery which offers very high ion transport and low impedance.

During my graduate training, I have had the opportunity to interact with students through teaching and research advising. I have taught and mentored students in my role as a Teaching Assistant for two undergraduate courses at Berkeley. I have been proactive in managing extra-curricular events and serving as a liaison for several graduate student events. For the past three years, I have run a weekly nanotechnology colloquium that hosts speakers from both academic backgrounds and industrial labs. I am also a member of the EECS graduate women’s association (WICSE).

I completed an M.S. in EECS at UC Berkeley in 2015 and a B.E. in Manufacturing Processes and Automation Engineering from the University of Delhi, India, in 2010. During my undergraduate study, I worked on a variety of projects involving the design, fabrication, and mathematical analysis of automated material handling systems. I also worked at Bharat Heavy Electricals Limited (BHEL), India, on the design of fire protection system layouts for thermal power plants from 2010 to 2011.

Position: Ph.D.

Current Institution: University of California, San Diego

Abstract:
Code deficiencies and bugs constitute an unavoidable part of software systems. In safety-critical systems, like aircraft or medical equipment, even a single bug can lead to catastrophic impacts such as injuries or death. Formal verification can be used to statically track code deficiencies by proving or disproving correctness properties of a system. However, in its current state, formal verification is a cumbersome process that is rarely used by mainstream developers.

During my research, we developed LiquidHaskell, a usable formal verifier for Haskell programs. LiquidHaskell naturally integrates the specification of correctness properties into the development process. Moreover, verification is automatic, requiring no explicit proofs or complicated annotations. At the same time, the specification language is expressive and modular, allowing the user to specify correctness properties ranging from totality and termination to memory safety and safe resource (e.g., file) manipulation. Finally, LiquidHaskell has been used to verify more than 10,000 lines of real-world Haskell programs.

LiquidHaskell serves as a prototype verifier in a future where formal techniques will be used to facilitate, instead of hinder, software development. For instance, by automatically providing instant feedback, a verifier will allow a web security developer to immediately identify potential code vulnerabilities.

Bio:
Niki Vazou is a Ph.D. candidate at University of California, San Diego, supervised by Ranjit Jhala. She works in the area of programming languages, with the goal of building usable program verifiers that will naturally integrate formal verification techniques into the mainstream software development chain. Niki Vazou received the Microsoft Research Ph.D. fellowship in 2014 and her BS from National Technical University of Athens, Greece in 2011.

Position: Ph.D. Candidate

Current Institution: University of California, Berkeley

Abstract:
Resource-efficient and high-performance big data systems via a coding-theoretic approach

The data revolution is strongly influencing almost all walks of human endeavor today. This paradigm shift is enabled by so-called big data systems, which make it possible to store and analyze massive amounts of data. The foundation of any big data system is a large-scale, distributed data storage system that typically comprises thousands of interconnected servers. At such massive scales of operation, failures and operational glitches are the norm rather than the exception, making it imperative to store data redundantly. Most systems today use a strategy termed replication, storing multiple copies of the data on different servers. However, the amount of data to be stored is growing at an unprecedented rate, far surpassing Moore’s law. As a result, replication is quickly becoming economically infeasible. Coding theory offers a compelling alternative that is optimal in storage space: erasure codes. For this reason, many storage systems are starting to deploy coding instead of replication. While traditional codes are optimal with respect to storage space utilization, the initial adopters of this technology have shown that these codes severely burden other system resources such as I/O, network bandwidth, and CPU. Furthermore, coding is being introduced into big data systems as an afterthought, leading to a fundamental disconnect between the design of these systems and the capabilities that coding offers. My research aims to address these challenges both by constructing new codes and by designing systems that can leverage the full potential of codes.
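
As a toy illustration of why erasure coding is storage-optimal relative to replication (my own example, not one of the talk’s code constructions): a single XOR parity block protects k data blocks against any one loss at (k+1)/k storage overhead, whereas even 2-way replication costs 2x and typical 3-way replication costs 3x.

    from functools import reduce

    def xor_blocks(blocks):
        # XOR a list of equal-length byte blocks together.
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    data = [bytes([i] * 8) for i in range(1, 5)]   # k = 4 data blocks
    parity = xor_blocks(data)                      # 1 parity block -> 1.25x overhead

    # Lose one block, then recover it from the survivors plus the parity.
    lost = 2
    survivors = [b for i, b in enumerate(data) if i != lost]
    recovered = xor_blocks(survivors + [parity])
    assert recovered == data[lost]

    print("3-way replication overhead: 3.0x; single-parity overhead:",
          (len(data) + 1) / len(data), "x")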

My approach towards this goal is twofold, spanning the areas of both theory and systems. In the first part of my talk, I will present new classes of codes that we have constructed for providing fault tolerance while addressing specific challenges that arise in big data systems. These codes significantly improve on traditional codes in terms of various critical system resources such as I/O, network bandwidth and CPU, while also being storage space optimal. An implementation of one of our new codes on Facebook’s data warehouse cluster in production has shown significant benefits over the state of the art. This new code is also being incorporated into the Apache Hadoop Distributed File System, which is the most widely used file system in big data systems today. In the second part of my talk, I will present our work on identifying new ways in which coding can be employed to drive performance improvements in big data systems. Currently, big data systems employ coding as a resource-efficient alternative to replication for achieving fault tolerance. However, coding offers a broad range of properties and trade-offs that are fundamentally different from replication. These properties can be exploited to achieve performance improvements, beyond fault tolerance, in ways that are not possible under replication. As a step in this direction, we have recently designed and built a caching system for data-intensive clusters that exploits properties of coding to provide significant improvements in load balancing and read/write latencies. Overall, I envision coding-theoretic principles enriching big data systems in a multitude of ways, and I plan to enable this through both fundamental theoretical and systems research.

Bio:
Rashmi K. Vinayak is a PhD candidate in the Electrical Engineering and Computer Science department at UC Berkeley. Her research interests lie in the theoretical and system challenges that arise in storage and analysis of big data. Rashmi’s dissertation research focuses on achieving significantly better performance and resource efficiency in big data systems using coding-theoretic principles, specifically involving designing new codes for distributed storage systems as well as building systems that employ these new coding techniques in novel ways. Rashmi’s research has been recognized by the IEEE Data Storage Best Paper and Best Student Paper Awards for 2011 and 2012, and the Eli Jury Award 2016 which is the best dissertation award from the UC Berkeley EECS department presented for outstanding achievement in the area of Systems, Communications, Control, or Signal Processing. Rashmi is also a recipient of the Facebook Fellowship 2012-13, Microsoft Research PhD Fellowship 2013-15, and the Google Anita Borg Memorial Scholarship 2015-16.

Position: Ph.D. Student

Current Institution: Carnegie Mellon University

Abstract:
Probabilistic Bounded Delta-Reachability Analysis for Stochastic Hybrid Systems

We consider probabilistic bounded reachability problems for two classes of models of stochastic hybrid systems. The first one is (nonlinear) hybrid automata with parametric uncertainty. The second one is probabilistic hybrid automata with additional randomness for both transition probabilities and variable resets. Standard approaches to reachability problems for linear hybrid systems require numerical solutions for large optimization problems, and become infeasible for systems involving both nonlinear dynamics over the reals and stochasticity. Our method encodes stochastic information by using a set of introduced random variables, and combines delta-complete decision procedures and statistical tests to solve delta-reachability problems in a sound manner, i.e., it always decides correctly if, for a given assignment to all random variables, the system actually reaches the unsafe region. Compared to standard simulation-based methods, it supports non-deterministic branching, increases the coverage of simulation, and avoids the zero-crossing problem. We demonstrate its applicability by discussing three representative biological models and additional benchmarks for nonlinear hybrid systems with multiple probabilistic system parameters.

Bio:
I am a final-year Ph.D. student under the supervision of Prof. Edmund M. Clarke (Turing Award 2007) in the Computer Science Department at Carnegie Mellon University. Before this, I received a B.A. degree in computer science from Wuhan University, China, in 2006, and an M.A. degree in computer science from the Institute of Software, Chinese Academy of Sciences, China. I was an intern at Microsoft Research Cambridge with Dr. Jasmin Fisher in Fall 2011. I was awarded a Richard King Mellon Foundation Presidential Fellowship in the Life Sciences by Carnegie Mellon University from 2015 to 2016. My research focuses on formal specification and verification of software and hardware, especially reachability analysis of stochastic hybrid systems. I am also interested in developing formal models, modeling languages, and algorithms that address problems of practical biological and medical concern.

Position: Graduate Research Associate

Current Institution: University of Illinois at Urbana-Champaign

Abstract:
The Value of Privacy: Strategic Data Subjects, Incentive Mechanisms and Fundamental Limits

We study the value of data privacy in a game-theoretic model of trading private data, where a data collector purchases private data from strategic data subjects (individuals) through an incentive mechanism. The private data of each individual represents her knowledge about an underlying state, which is the information that the data collector desires to learn. Unlike most existing work on privacy-aware surveys, our model does not assume the data collector to be trustworthy. As a result, each individual takes full control of her own data privacy and reports only a privacy-preserving version of her data.

In this work, the value of epsilon units of privacy is measured by the minimum payment over all nonnegative payment mechanisms under which an individual’s best response at a Nash equilibrium is to report the data with a privacy level of epsilon. The higher epsilon is, the less private the reported data is. We derive lower and upper bounds on the value of privacy which are asymptotically tight as the number of data subjects becomes large. Specifically, the lower bound asserts that it is impossible to buy epsilon units of privacy with a smaller payment, and the upper bound is given by an achievable payment mechanism that we design. Based on these fundamental limits, we further derive lower and upper bounds on the minimum total payment for the data collector to achieve a given learning accuracy target, and show that the total payment of the designed mechanism is at most one individual’s payment away from the minimum.

Bio:
Weina Wang received her B.E. degree in Electronic Engineering from Tsinghua University, Beijing, China, in 2009. She is currently pursuing a Ph.D. degree in the School of Electrical, Computer and Energy Engineering at Arizona State University, Tempe, AZ. Her research interests include resource allocation in stochastic networks, data privacy, and game theory. She won the Joseph A. Barkson Fellowship for the 2015-16 academic year and the University Graduate Fellowship, both from Arizona State University.

Position: Postdoctoral Research Associate

Current Institution: University of Washington

Abstract:
Dynamic Metamaterial Antennas for Novel Microwave Imaging

Metamaterials, designer electromagnetic materials composed of subwavelength elements, can manipulate electromagnetic radiation into spatially- and frequency-diverse patterns. In addition, dynamic metamaterials can be tuned with an external stimulus, allowing their characteristics to change in real time. Dynamic metamaterial antennas (MMAs) have emerged as a viable option in radar imaging systems. Synthetic aperture radar (SAR) utilizes a moving antenna platform to increase resolution over standard radar without the hardware burdens of a large aperture. Beam steering methods can further increase resolution. On current SAR systems, beam steering is generally achieved through a mechanically gimbaled antenna or a phased array of antennas. While phased arrays overcome the bulky nature of a mechanical gimbal, their cost, power draw, weight, and complexity are problematic for systems such as spaceborne SAR imagers. Dynamic metamaterial antennas can achieve the beam steering necessary for increased resolution while maintaining a fairly low-cost, lightweight, low-power design. Here we have demonstrated SAR imaging in the X-band using a commercially available dynamic metamaterial antenna.

Metamaterial antennas also have the ability to go beyond standard SAR imaging using their diverse beam patterns and adaptive beamforming. A potential application is aberration correction in microwave imaging systems. For example, a dynamic metamaterial could act as a feed for a large parabolic reflector in a spaceborne imaging system. Defects that may arise in the reflector due to thermal fluctuations, mechanical stress, or other system errors can be corrected as they occur by optimizing the MMA beam pattern, similar to the technique of adaptive optics. We have shown a proof-of-concept demonstration of aberration correction in a SAR imaging system by steering the MMA beam to avoid a crack in a reflector in the imaging system.

In addition, MMAs could utilize their spatially- and frequency-diverse output beams to increase resolution over standard SAR imaging approaches. In traditional SAR methods there is a trade-off between high resolution using a small aperture and the signal-to-noise ratio decrease incurred by the resulting large beamwidth. By sampling the scene with several smaller sub-beams, orthogonal in either frequency or time, both resolution and signal-to-noise ratio can be enhanced over standard SAR imaging methods while increasing the total image size. We have implemented this enhanced resolution stripmap mode SAR in our laboratory system and have indeed seen improved resolution and larger imaging area over standard SAR methods.
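
For reference, the trade-off described above follows from standard stripmap SAR relations (stated here for orientation, not taken from this work): the azimuth resolution is set by the physical antenna length L, while the antenna gain scales with the aperture area A,

    \delta_{az} \approx \frac{L}{2}, \qquad G = \frac{4\pi A}{\lambda^2},

so shrinking the antenna sharpens azimuth resolution but lowers gain and hence SNR; sampling the scene with several narrower sub-beams is one way to ease this tension.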

While our experimental demonstrations have been proof-of-concept to this point, we believe they lay the groundwork for future microwave imaging systems using novel imaging modalities leveraged by dynamic metamaterial antennas.

Bio:
Claire Watts is currently a post-doctoral researcher under Prof. Matt Reynolds in the University of Washington Electrical Engineering Department. Claire received her B.A.S. in physics from Colgate University in 2010 and her Ph.D. in physics from Boston College in 2015. Her thesis work focused on novel ways to use metamaterials in millimeter-wave, THz, and infrared imaging systems. She wrote a first author publication on THz imaging with metamaterials that was published in Nature Photonics and featured in their “News and Views” section. She published a first author review article on metamaterial electromagnetic wave absorbers that was featured on the cover of Advanced Optical Materials and currently has over 300 citations on Google Scholar. She gave an invited presentation at the Terahertz: Opportunities for Industry Workshop in Lausanne, Switzerland in February, 2015.

At UW, Claire continues to image with metamaterial devices and has successfully set up a laboratory X-band synthetic aperture radar (SAR) imaging system using a dynamic metamaterial antenna. Here, she implemented novel imaging techniques that increase resolution over standard SAR imaging by leveraging the beam-steering capabilities of the metamaterial antenna.

Position: Doctoral Student

Current Institution: UC Berkeley

Abstract:
Accelerated gradient methods play a central role in optimization, achieving optimal rates in many settings. While many generalizations and extensions of Nesterov’s original acceleration method have been proposed, it is not yet clear what the natural scope of the acceleration concept is. In this work, we study accelerated methods from a continuous-time perspective. We show that there is a Lagrangian functional, which we call the Bregman Lagrangian, that generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. We show that the continuous-time limits of all of these methods correspond to traveling the same curve in spacetime at different speeds. From this perspective, Nesterov’s technique and many of its generalizations can be viewed as a systematic way to go from the continuous-time curves generated by the Bregman Lagrangian to a family of discrete-time accelerated algorithms.
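
For readers who want a concrete discrete-time member of the family discussed above, here is a minimal sketch of Nesterov’s accelerated gradient descent for an L-smooth convex objective (the standard scheme, shown for orientation; it is not the continuous-time machinery itself).

    import numpy as np

    def nesterov_agd(grad_f, x0, L, n_iters=100):
        x_prev = np.asarray(x0, dtype=float)
        y = x_prev.copy()
        t = 1.0
        for _ in range(n_iters):
            x = y - grad_f(y) / L                        # gradient step at the extrapolated point
            t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x + ((t - 1) / t_next) * (x - x_prev)    # momentum / extrapolation step
            x_prev, t = x, t_next
        return x_prev

    # Example use: minimize 0.5 * ||A x - b||^2 with
    #   grad_f = lambda x: A.T @ (A @ x - b)  and  L = np.linalg.norm(A, 2) ** 2.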

Bio:
I am a fourth-year doctoral student at UC Berkeley working with Michael Jordan and Benjamin Recht. I am broadly interested in applied math, dynamical systems, and optimization, and I am currently a member of the Statistical AI Lab and the AMPLab at Berkeley. Before starting graduate school, I graduated from Harvard University in 2011, where I received my bachelor’s degree in applied mathematics and philosophy. During my year off, I worked with Professor Cynthia Rudin at MIT in the Predictions Analysis Lab.

Position: Ph.D. Student

Current Institution: University of California, Berkeley

Abstract:
Carpooling: a step towards seamless urban transportation

Traffic touches the lives of billions of people on this planet; in the US, it costs us 50 million hours annually in commuting and 1.9 billion gallons of gas in congestion. A sizable 30% of the traffic in cities is simply searching for parking. Traffic is all at once a technical problem, a political problem, and a deeply personal problem, and these aspects must be studied and addressed together. To this end, we must devise solutions that 1) are technically feasible, 2) are politically implementable in a reasonable time-frame, and 3) people will actually use.

Motivated by the problem of insufficient infrastructure for current and growing traffic demands, we studied carpooling, a long-hailed route to better network utilization that nevertheless still accounts for less than 10% of trips. We sought to understand the failure modes of carpooling, and found through an analysis of 200 employees of a large corporation that time is by far the most important factor in mobility preferences, perhaps important enough to overcome social, psychological, and cultural barriers to carpooling. We therefore formulated a carpooling problem in which all users, riders and drivers alike, can potentially benefit from carpooling through time savings. Specifically, we focused on the carpooling incentive structure of high-occupancy vehicle (HOV) lanes, a politically well-established and easily implemented mechanism for traffic control. When the HOV lanes are restricted to three or more occupants (HOV3), which is necessary to maintain the high throughput of the lanes, we show that the corresponding set partitioning optimization problem of optimally assigning users to carpool groups is NP-complete. Accordingly, prior work on related problems has not been able to scale beyond 1000 users. We therefore study a relaxed version of the problem which allows up to three passengers, and we demonstrate that a sampling-based local search method, which approximates the solution, enables us to scale to 100K users. By carefully considering the technical, political, and personal aspects of the carpool problem from the start, our results are ready to be field tested.
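
The following is a toy sketch of a sampling-based local search for grouping commuters into carpools of at most three, in the spirit of the relaxation described above; the pairwise "time savings" objective is a stand-in assumption for the real travel-time model, purely for illustration.

    import random

    def group_value(group, savings):
        # savings: symmetric dict-of-dicts of pairwise time-savings scores
        return sum(savings[a][b] for i, a in enumerate(group) for b in group[i + 1:])

    def local_search(users, savings, n_samples=100000, seed=0):
        rng = random.Random(seed)
        groups = [[u] for u in users]                  # start with everyone driving alone
        for _ in range(n_samples):
            g1, g2 = rng.sample(range(len(groups)), 2)
            if not groups[g1] or len(groups[g2]) >= 3: # respect the HOV3-style size cap
                continue
            u = rng.choice(groups[g1])
            before = group_value(groups[g1], savings) + group_value(groups[g2], savings)
            groups[g1].remove(u)
            groups[g2].append(u)
            after = group_value(groups[g1], savings) + group_value(groups[g2], savings)
            if after < before:                          # revert moves that do not help
                groups[g2].remove(u)
                groups[g1].append(u)
        return [g for g in groups if g]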

By overcoming social barriers and optimizing for the use of public infrastructure, this work paves the way for coordinating users for even higher throughput. I conclude with two (still hypothetical) examples: 1) automatically learning and introducing ad-hoc bus routes, which can carry up to 80 passengers instead of three, and 2) coordinating individual cars with existing high-throughput public transit systems (buses, subway, train) for a seamless transportation experience.

Bio:
Cathy Wu received a Master of Engineering degree in Electrical Engineering and Computer Science (EECS) (2013) and a Bachelor of Science degree in EECS (2012) from the Massachusetts Institute of Technology (MIT). She is currently pursuing her Ph.D. in the Department of EECS at the University of California, Berkeley. Cathy is a fellow with the NSF Graduate Research Fellowship Program and a winner of the Chancellor’s Fellowship for Graduate Study at UC Berkeley. She is also an awardee of the NDSEG fellowship and the Dwight David Eisenhower Graduate Fellowship. Her current research interests are at the intersection of optimization, statistics, cyber-physical systems, transportation, and robotics. Cathy particularly enjoys working between fields to identify and solve problems with direct positive societal impact. She also enjoys creating art, synthesizing treasure from trash, automated gardens, and personal data analytics.

Position: Ph.D. Candidate

Current Institution: Stanford University

Abstract:
Developing an Instrumented Mouthguard Sensor to Study Mild Traumatic Brain Injury

Mild traumatic brain injuries, commonly known as concussions, not only cause acute debilitating symptoms, but may also lead to long-term neurodegeneration. Due to heightened awareness of this problem, many companies and researchers have developed head impact sensors, some of which are advertised to quantify injury risks. Despite the hype, none of these sensors have been fully validated. In fact, a few sensors were shown to have >100% measurement errors. Thus, these sensors are far from ready as commercial injury risk predictors, and research data gathered using these sensors are questionable. A rigorously validated, accurate head impact sensor is needed. In our lab, we developed an instrumented mouthguard that is able to 1) capture head motion dynamics relevant to injury, 2) couple tightly to the skull for accurate measurement of skull kinematics, and 3) detect head impacts on the field with high sensitivity and specificity. Using human injury data gathered by this instrument, we found that brain deformation measures such as strain and strain rate may be better injury predictors than traditional skull acceleration measures. We also discovered that helmeted impacts on the field may be exciting a resonance of the brain and amplifying brain-skull relative motion. In addition, we studied brain injury risks in other activities, such as roller coaster rides, and found that they may lead to brain deformations on a similar level as mild sports impacts. In the near future, we hope to widely disseminate this technology to gather a large amount of human data for injury mechanism research. Once we know the link between the mechanical input and neurological deficit, we can further develop the sensor into a real-time injury screening device.
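
As an illustrative sketch (my own, not the lab’s actual pipeline) of how on-field impact detection might be framed: each recorded event is reduced to simple kinematic features, such as peak linear acceleration and peak angular velocity, and a standard classifier separates true head impacts from spurious events like chewing or dropping the device. The feature and model choices here are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def featurize(lin_acc, ang_vel):
        # lin_acc, ang_vel: (T, 3) arrays for one recorded event
        a = np.linalg.norm(lin_acc, axis=1)
        w = np.linalg.norm(ang_vel, axis=1)
        return [a.max(), a.mean(), w.max(), w.mean()]

    def train_impact_classifier(events, labels):
        X = np.array([featurize(la, av) for la, av in events])
        y = np.array(labels)              # 1 = true head impact, 0 = spurious event
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
        return clf.fit(X, y)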

Bio:

I am a 4th year PhD student in the Bioengineering Department at Stanford University. In Dr. David Camarillo’s Smart Biomedical Devices lab, my research focus is to develop a novel instrumented mouthguard sensor to study mild traumatic brain injury. My anticipated graduation date is June 2017, and I would love to pursue a career in academia post-graduation.

I have always been passionate about biomedical research, since I am interested in electrical engineering and bioengineering, and feel strongly about doing research that may directly benefit healthcare. I completed my undergraduate education at the University of Toronto, majoring in Biomedical Engineering. In my second year, I worked in Dr. Yu Sun’s lab, developing a microfluidic system that aspirates cells to determine their mechanical properties. After year three, I interned at an MRI coil company called Sentinelle Medical as an electrical engineer, to develop imaging coils for breast cancer detection. For my undergraduate thesis, I learned and applied human factors engineering techniques to redesign the control interface for radiotherapy delivery systems. Through these experiences, I gained insight and developed skills in multiple aspects of medical devices research both in academia and in industry.


Coming to Stanford, I joined Dr. Camarillo’s lab, hoping to continue to apply my engineering skills to better understand and solve healthcare problems. I came across the topic of traumatic brain injury, and have strived to gain a better understanding of this ‘silent epidemic’ in my PhD work. Upon joining the lab, I started developing an instrumented mouthguard device that contains inertial sensors to measure head motion during dangerous sports impacts. Using machine learning techniques, I developed a smart head impact classifier that can detect dangerous head motion on the field. Working closely with Stanford Athletics, I deployed instrumented mouthguards to the Stanford football team and collected a large human dataset. From this dataset, we identified potential mechanisms of concussion and promising injury risk predictors.

In my future research, I hope to continue to apply engineering techniques to study and improve human health. I plan to develop different wearable devices that can help gather data to characterize human diseases, and use data mining and machine learning approaches to analyze such data. I am especially interested in complex systems such as the brain, in relation to widely prevalent diseases including concussions and neurodegenerative disorders.

Being born in China and having spent most of my teenage years in Canada, I call many places home, including sunny California – where I am now. Aside from my research interests, I am also a passionate photographer. Just like in research, I use the camera to identify the beauty and balance in this world, and would love to continue exploring and making discoveries.

Position: Ph.D. Candidate

Current Institution: University of Illinois at Urbana-Champaign

Abstract:
Data-centric scheduling: novel algorithms, state space collapse and delay minimization in heavy traffic

Data-processing applications are posing increasingly significant challenges to scheduling in today’s computing clusters. The presence of data induces an extremely heterogeneous cluster where processing speed depends on the task-server pair. The situation is further complicated by ever-changing technologies of networking, memory, and software architecture. As a result, a suboptimal scheduling algorithm causes unnecessary delay in job completion and wastes system capacity.

We propose a versatile model featuring a multi-class parallel-server system that readily incorporates the different characteristics of a variety of systems. The model has been studied by Harrison, Williams, and Stolyar. However, achieving delay optimality in heavy traffic with unknown arrival rate vectors has remained an open problem.

We propose a novel algorithm that achieves delay optimality with unknown arrival rates, which enables its application to data-centric clusters. New proof techniques were required, including the construction of an ideal load decomposition. To demonstrate the effectiveness of the algorithm, we implemented a Hadoop MapReduce scheduler and showed that it achieves a >10x improvement over existing schedulers.
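For illustration only, the sketch below simulates a multi-class parallel-server system in which service rates depend on the task-server pair; the rate values and the simple MaxWeight-style rule are stand-ins for exposition, not the delay-optimal algorithm proposed here:

    # Illustrative multi-class parallel-server loop: service rates mu[type][server]
    # differ across task-server pairs, and each server greedily serves the task
    # type with the largest queue-length * rate product (a MaxWeight-style rule).
    import random

    random.seed(0)
    task_types, servers = 3, 2
    mu = [[1.0, 0.3], [0.4, 1.2], [0.8, 0.8]]   # hypothetical service rates
    queues = [0, 0, 0]                           # jobs waiting per task type

    for _ in range(1000):
        queues[random.randrange(task_types)] += 1            # arrival rates unknown to the scheduler
        for s in range(servers):
            i = max(range(task_types), key=lambda k: queues[k] * mu[k][s])
            if queues[i] > 0 and random.random() < mu[i][s] / 2.0:   # completion prob. proportional to rate
                queues[i] -= 1

    print("final queue lengths:", queues)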

Bio:
Qiaomin Xie is a final-year Ph.D. candidate in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. She received a B.E. in Electronic Engineering from Tsinghua University in 2010. Her research focuses on scalability, multi-resource packing, and scheduling for efficient data-centric systems. She is a recipient of the Yi-Min Wang and Pi-Yu Chung Research Award (2015) and the Best Paper Award at the IFIP Performance conference (2011).

Position: Postdoctoral Fellow

Current Institution: Carnegie Mellon University

Abstract:
Learning Semantic Frames for Natural Language Understanding

Automatic extraction of meaning representations from natural language text is crucial for applications such as question answering, knowledge base construction, and intelligent dialog agents. Semantic frames, structured representations of prototypical scenarios people talk about (e.g., Who does What to Whom, When, Where, and How), have been widely used in Natural Language Processing (NLP) systems to represent content conveyed in text. Automatic extraction of semantic frames from text is challenging, as it requires reasoning about various kinds of semantic elements, e.g., entities, attributes, events, and relations, and the correct interpretation of their meanings often requires background knowledge and relevant context. My research addresses these two challenges by developing statistical models that can jointly reason about different types of semantic elements, while taking into account their semantic dependencies based on context and background knowledge. In contrast to existing approaches, our joint model predicts semantic frames in a unified framework instead of a pipeline of independent classifiers. This leads to state-of-the-art performance on various natural language understanding tasks, including event extraction (i.e., extracting what happened, who was involved, when, and where), event coreference resolution (i.e., predicting which event descriptions refer to the same event), and fine-grained opinion extraction (i.e., predicting the polarity of an opinion, its holder, and its target).
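As a simplified illustration of the structured output involved, the Python sketch below defines a toy semantic frame for a single event; the field names are hypothetical and not tied to the specific models described above:

    # Toy semantic frame: a structured record of who did what to whom, when, and where.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SemanticFrame:
        predicate: str                    # event trigger, e.g. "acquired"
        agent: Optional[str] = None       # who
        patient: Optional[str] = None     # whom/what
        time: Optional[str] = None        # when
        location: Optional[str] = None    # where
        polarity: Optional[str] = None    # opinion polarity, where applicable

    # "Google acquired DeepMind in London in 2014."
    frame = SemanticFrame(predicate="acquired", agent="Google",
                          patient="DeepMind", time="2014", location="London")
    print(frame)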

Bio:
Bishan Yang is a Post-doctoral Fellow at Carnegie Mellon University. Her research develops machine learning techniques for natural language understanding. She is currently working with Prof. Tom Mitchell on developing a machine reading system that automatically reads documents and makes predictions based on the meanings conveyed in text as well as background knowledge. She received her PhD from Cornell University in 2016. Her PhD thesis is on automatic extraction of opinions and events expressed in text. Prior to that, she received her BS and MS in Computer Science from Peking University, China. She is a recipient of the Olin Fellowship from Cornell University.

Position: Ph.D. Candidate/Graduate Research Assistant

Current Institution: Columbia University

Abstract:
Ultrahigh-Resolution OCT Imaging of Breast Tissue with Feature Differentiation

Light-based imaging approaches such as diffuse optical tomography (DOT) now allow high-resolution, non-destructive, and radiation-free visualization of breast tissue. However, a surgical biopsy is still required for further tumor treatment. In contrast, optical imaging-guided minimally invasive procedures such as mammary ductoscopy offer new access to abnormal sites in situ, avoiding traditional surgical biopsy and treatment. Optical coherence tomography (OCT), whose feasibility has been demonstrated in breast cancer imaging, has the advantage of rapid, cellular-level visualization of tissue structures in three dimensions, as opposed to the surface-only images provided by conventional ductoscopy. Furthermore, OCT can be easily implemented in a small-diameter, needle-like probe for percutaneous procedures, opening the door to diagnostic and therapeutic breast cancer management that was not previously possible. Using an ultrahigh-resolution OCT system, we show that heterogeneities in breast tissue revealed in the OCT images correlate well with the corresponding histological analysis. In addition, different image processing approaches were developed to extract features for various breast tissue types, which can be useful for better visualization of OCT intensity data.
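As a generic illustration of intensity-based feature extraction (not the specific approaches developed in this work), the Python sketch below computes a local-standard-deviation texture map from a 2-D OCT intensity image; the synthetic B-scan stands in for real data:

    # Generic texture feature: per-pixel standard deviation of OCT intensity
    # within a sliding window, which tends to separate homogeneous from
    # heterogeneous tissue regions.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_std(image, window=15):
        mean = uniform_filter(image, size=window)
        mean_sq = uniform_filter(image ** 2, size=window)
        return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

    bscan = np.random.default_rng(0).random((256, 512))   # synthetic stand-in for a B-scan
    texture = local_std(bscan, window=15)
    print(texture.shape, float(texture.mean()))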

Bio:
Xinwen Yao is a fourth-year PhD student in the Department of Electrical Engineering at Columbia University, in Prof. Christine Hendon's Structure Function Imaging Laboratory (SFIL). She received her Bachelor's degree from Xiamen University (China) and her Master's degree in Electrical Engineering from Columbia University. At SFIL, she focuses on three domains: (i) design and implementation of instrumentation for ultrahigh-resolution (UHR) optical coherence tomography (OCT) imaging; (ii) application of OCT imaging to human myocardium and human breast cancer; and (iii) development of in vivo OCT imaging tools, with the ultimate goal of innovating in OCT imaging devices and image processing methods to improve the diagnosis and treatment of cardiac diseases and early-stage breast cancer. Her work on UHR OCT received the Best Poster Award at the 2015 SPIE Biophotonics Summer School and the Student Poster Presentation Award at the 2016 OSA Biomedical Optics Congress. Beyond academic activities, she served as Secretary of the Columbia University SPIE/OSA student chapter in 2015-2016 and is actively involved in various outreach programs. She also received a Student Officer Travel Grant at 2016 SPIE Photonics West.

Position: Ph.D. student

Current Institution: University of Washington

Abstract:
Automating Data Management and Storage for Reactive, Wide-area Applications with Diamond

Users of today's popular wide-area apps (e.g., Twitter, Google Docs, and Words with Friends) no longer save and reload when updating shared data; instead, these applications are reactive, providing the illusion of continuous synchronization across mobile devices and the cloud. Maintaining this illusion presents a challenging distributed data management problem for application programmers. Modern reactive applications consist of widely distributed processes sharing data across mobile devices and cloud servers. These processes make concurrent data updates, can stop or fail at any time, and may be connected by slow or unreliable links. While distributed storage systems can provide persistence and availability, programmers still face the formidable challenge of synchronizing updates between application processes and distributed storage in a way that is fault-tolerant and consistent in a wide-area environment.

This talk presents Diamond, the first reactive data management service for wide-area applications. Diamond performs the following functions on behalf of the application: (1) it ensures that updates to shared data are consistent and durable, (2) it reliably coordinates and synchronizes shared data updates across processes, and (3) it automatically triggers reactive code when shared data changes so that processes can perform appropriate tasks. For example, when a user makes an update from one device (e.g., a move in a multi-player game), Diamond persists the update, reliably propagates it to other users’ devices, and transparently triggers application code on those devices to react to the changes.

Reactive data management in the wide-area context requires delicate balancing; thus, Diamond implements the difficult mechanisms required by these applications (such as logging and concurrency control), while allowing programmers to focus on high-level data-sharing requirements (e.g. atomicity, concurrency, and data layout). Diamond introduces three new concepts:

* Reactive Data Map (rmap), a primitive that lets applications create reactive data types (shared, persistent, in-memory data structures) and map them into Diamond so it can automatically synchronize them across distributed processes and persistent storage.

* Reactive Transactions, an interactive transaction type that automatically re-executes in response to shared data updates. Unlike materialized views or database triggers, these "live" transactions run application code to perform local, application-specific functions (e.g., UI changes).

* Data-type Optimistic Concurrency Control (DOCC), a concurrency control mechanism that leverages data-type semantics to concurrently commit transactions executing commutative operations (e.g., writes to different list elements, increments to a counter). Our experiments show that DOCC is critical to coping with wide-area latencies, reducing abort rates by up to 5x.
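To make the intuition behind DOCC concrete, the toy check below marks two operations as non-conflicting when they commute; the operation encoding is a hypothetical simplification, not the actual mechanism:

    # Data-type-aware conflict check in the spirit of DOCC: increments to the
    # same counter commute, while blind writes to the same register do not.
    def commute(op_a, op_b):
        kind_a, key_a = op_a
        kind_b, key_b = op_b
        if key_a != key_b:
            return True                   # operations on different objects never conflict
        if kind_a == kind_b == "increment":
            return True                   # counter increments commute
        return False                      # e.g. two writes to the same register

    print(commute(("increment", "score"), ("increment", "score")))   # True
    print(commute(("write", "name"), ("write", "name")))             # False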

We designed and implemented a Diamond prototype in C++ with language bindings for C++, Python, and Java on both x86 and Android platforms. To evaluate Diamond, we built and measured both Diamond versions and custom versions (using explicit data management) of four reactive apps. Our experiments show that Diamond significantly reduces the complexity and size of reactive applications, provides strong transactional guarantees that eliminate common data races, and supports automatic reactivity with performance close to that of custom-written reactive apps.
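To give a feel for the programming model, the Python sketch below imitates the rmap and reactive-transaction pattern within a single process; all names (Store, rmap, reactive_txn) are invented for illustration and are not Diamond's actual API:

    # Hypothetical Diamond-style usage: map a shared object and register a
    # reactive transaction that re-executes whenever the object changes.
    class Store:
        def __init__(self):
            self._data, self._reactive = {}, []

        def rmap(self, key, default=""):
            self._data.setdefault(key, default)
            return key                    # handle used by reads and writes below

        def read(self, key):
            return self._data[key]

        def write(self, key, value):      # in Diamond this would be a durable, consistent
            self._data[key] = value       # update propagated to all processes...
            for txn in self._reactive:    # ...which then re-run the registered reactive code
                txn()

        def reactive_txn(self, fn):
            self._reactive.append(fn)
            fn()                          # run once to render the initial state
            return fn

    store = Store()
    status = store.rmap("game:42:status", default="waiting")

    @store.reactive_txn
    def render():
        print("UI shows:", store.read(status))

    store.write(status, "player 2 moved")   # triggers the reactive transaction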

Bio:

I am a fourth year PhD student, working with Hank Levy and Arvind Krishnamurthy in the Computer Systems Lab at the University of Washington. My PhD research focuses on distributed systems for large-scale applications with two main directions: (1) distributed programming platforms for mobile-cloud applications and (2) high-performance transactional storage for datacenter applications.

Sapphire [1] is a new distributed programming platform that provides customizable and
extensible deployment of mobile/cloud applications. Sapphire’s key design feature is its
distributed runtime system, which supports a flexible and extensible deployment layer for solving complex distributed systems tasks, such as fault-tolerance, code-offloading, and caching. Rather than writing distributed systems code, programmers choose deployment managers that extend Sapphire’s kernel to meet their applications’ deployment requirements. In this way, each application runs on an underlying platform that is customized for its own distribution needs.

TAPIR [2] is a new protocol for distributed transactional storage systems that enforces a linearizable transaction ordering using a replication protocol with no ordering at all. The key insight behind TAPIR is that existing transactional storage systems waste work by layering a strong transaction protocol on top of a strong replication protocol. Instead, we designed inconsistent replication (IR), the first replication protocol to provide fault tolerance with no consistency guarantees. TAPIR, the Transactional Application Protocol for Inconsistent Replication, provides linearizable transactions using IR. By enforcing strong consistency only in the transaction protocol, TAPIR can commit transactions in a single round trip and order distributed transactions without centralized coordination.
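The toy Python sketch below illustrates the unordered, quorum-based flavor of inconsistent replication; the classes and the simple majority quorum are simplified assumptions, not the actual IR protocol:

    # Toy inconsistent replication: the client sends an operation to every
    # replica and succeeds once a majority has recorded it, with no attempt to
    # agree on ordering. Consistency would be enforced later by the transaction
    # protocol, as TAPIR does.
    class Replica:
        def __init__(self):
            self.record = []              # unordered log of operations seen

        def apply(self, op):
            self.record.append(op)        # no ordering, no cross-replica coordination
            return "ok"

    def ir_invoke(replicas, op):
        acks = sum(1 for r in replicas if r.apply(op) == "ok")
        return acks >= len(replicas) // 2 + 1

    replicas = [Replica() for _ in range(3)]
    print(ir_invoke(replicas, ("prepare", "txn-7")))   # True after a single round trip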

Before starting my PhD, I worked for three years at VMware in the virtual machine monitor group on virtual machine checkpointing. My work on Halite [3] used working set estimation to improve the performance of restoring virtual machines on the VMware hypervisor.

I received my S.B. in computer science from MIT in 2008 and my M.Eng. in 2009. For my M.Eng., I worked with Frans Kaashoek and Robert Morris on a flexible, wide-area distributed storage system [4].

[1] Customizable and Extensible Deployment for Mobile/Cloud Applications. I. Zhang, A. Szekeres, D. Van Aken, I. Ackerman, S.D. Gribble, A. Krishnamurthy, and H. M. Levy. Proceedings of the USENIX Symposium on Operating Systems Design and Implementation (OSDI). October 2014

[2] Building Consistent Transactions with Inconsistent Replication. I. Zhang, N.K. Sharma, A. Szekeres, A. Krishnamurthy, and D.R.K. Ports. Proceedings of the ACM Symposium on Operating Systems Principles (SOSP). October 2015.

[3] Optimizing VM Checkpointing for Restore Performance in VMware ESXi. I. Zhang, T. Denniston, Y. Baskakov, and A. Garthwaite. Proceedings of the USENIX Annual Technical Conference (ATC). June 2013.

[4] Flexible, Wide-Area Storage for Distributed Systems with WheelFS. J. Stribling, Y. Sovran, I. Zhang, X. Pretzer, J. Li, M.F. Kaashoek, and R. Morris. Proceedings of the USENIX Symposium on Networked Systems Design and Implementation (NSDI). April 2009.

Position: Postdoctoral Research Associate

Current Institution: Princeton University

Abstract:
Viability of Cloud Services

Cloud computing, due to its Infrastructure-as-a-Service (IaaS) nature, has revolutionized the way computing resources are utilized: they are generally virtualized in units of instances associated with remote virtual machines with specified amounts of CPU, memory, storage, and other attributes, and users can rent these instances by the hour, eliminating setup and maintenance costs for physical machines. With the growth of cloud services, cloud providers face an increasingly complicated problem of allocating their resources to different users: user demands are highly dynamic as jobs are submitted and completed at different times, making it difficult for cloud providers to maintain a consistent quality of experience (QoE). These resource allocations must take into account both the available capacity within datacenter networks and individual jobs' required instance hours and interruptibility, imposing new types of constraints on the operator's ability to route jobs among its datacenters and manage fluctuating user demands. We approach viable solutions for cloud services by using price incentives to shape user behavior. In this talk, I will present an auction-based spot pricing mechanism (published at SIGCOMM 2015) and an analysis of the viability of a cloud virtual service provider (published at SIGMETRICS 2016).
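As a simple illustration of price-based allocation (a textbook uniform-price auction, not the specific mechanism published at SIGCOMM 2015), the Python sketch below accepts the highest bids up to capacity and charges every winner the highest losing bid:

    # Uniform-price spot auction: winners are the top bids within capacity,
    # and the clearing price is the highest rejected bid.
    def spot_auction(bids, capacity):
        ranked = sorted(bids, key=lambda b: b[1], reverse=True)
        winners = ranked[:capacity]
        clearing = ranked[capacity][1] if len(ranked) > capacity else 0.0
        return [user for user, _ in winners], clearing

    users = [("alice", 0.30), ("bob", 0.12), ("carol", 0.25), ("dave", 0.18)]
    print(spot_auction(users, capacity=2))   # (['alice', 'carol'], 0.18)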

Bio:

Liang Zheng is currently a Postdoctoral Research Associate with the Department of Electrical Engineering, Princeton University. She received the Ph.D. degree in computer science from the City University of Hong Kong, Hong Kong, in 2015, and the bachelor's degree in software engineering from Sichuan University, Chengdu, China, in 2011. Her research interests are primarily in using data analytics to understand user behavior in computing/networked systems, particularly from an economic perspective. She received the First-class Student Research Excellence Award from the College of Science and Engineering in 2014, and was a finalist for the Microsoft Research Asia Fellowship in 2013.

Position: Ph.D. student

Current Institution: University of Illinois at Urbana-Champaign

Abstract:
Enforcing Customizable Consistency Properties in Software-Defined Networks

It is critical to ensure that network policy remains consistent during state transitions. However, existing techniques impose a high cost in update delay and/or FIB space. We propose the Customizable Consistency Generator (CCG), a fast and generic framework to support customizable consistency policies during network updates. CCG effectively reduces the task of synthesizing an update plan under the constraint of a given consistency policy to a verification problem, by checking whether an update can safely be installed in the network at a particular time, and greedily processing network state transitions to heuristically minimize transition delay. We show that a large class of consistency policies is guaranteed by this greedy heuristic alone; in addition, CCG makes judicious use of existing heavier-weight network update mechanisms to provide guarantees when necessary. As such, CCG nearly achieves the "best of both worlds": the efficiency of simply passing through updates in most cases, with the consistency guarantees of more heavyweight techniques. Mininet and physical testbed evaluations demonstrate CCG's capability to achieve various types of consistency, such as path and bandwidth properties, with zero switch memory overhead and up to a 3x delay reduction compared to previous solutions.
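A minimal sketch of the greedy pattern described above: repeatedly install any pending update that a checker verifies as safe under the current network state, and hand whatever remains to a heavier-weight fallback. The is_safe check and update representation below are placeholders, not CCG's verification engine:

    # Greedy update synthesis: apply updates as soon as a consistency check
    # verifies them as safe, and defer the rest to a stronger (slower) mechanism.
    def greedy_schedule(pending, state, is_safe, heavyweight_fallback):
        progress = True
        while pending and progress:
            progress = False
            for update in list(pending):
                if is_safe(update, state):        # the verification step
                    state = update(state)         # install immediately
                    pending.remove(update)
                    progress = True
        if pending:
            state = heavyweight_fallback(pending, state)
        return state

    # Toy usage: two rule installations that are trivially safe here.
    install_a = lambda s: s | {"rule-a"}
    install_b = lambda s: s | {"rule-b"}
    print(greedy_schedule({install_a, install_b}, set(),
                          is_safe=lambda u, s: True,
                          heavyweight_fallback=lambda p, s: s))   # {'rule-a', 'rule-b'} (order may vary)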

Bio:
Wenxuan Zhou is a Ph.D. student in Computer Science at the University of Illinois at Urbana-Champaign (UIUC), advised by Prof. Matthew Caesar. Her research focuses on network verification and synthesis, with an emphasis on software-defined networks, data centres, and enterprise networks. She received her Bachelor’s degree in Electronic Engineering from Beijing University of Aeronautics and Astronautics, China, and her Master’s degree in Computer Science from UIUC.

Position: Ph.D. Candidate

Current Institution: Princeton University

Abstract:
Highly Configurable Architecture for the Cloud

Businesses and academics are increasingly turning to Infrastructure-as-a-Service (IaaS) clouds to fulfill their computing needs. Unfortunately, current IaaS systems provide a severely restricted palette of rentable computing options, which do not optimally fit the workloads they execute. We address this challenge by proposing a highly configurable architecture encompassing several aspects of computer architecture (the Sharing Architecture, MITTS, and CASH). We design and evaluate a manycore architecture, called the Sharing Architecture [ASPLOS 2014], specifically optimized for IaaS systems by being reconfigurable on a sub-core basis. The Sharing Architecture enables better matching of workloads to micro-architectural resources by replacing static cores with Virtual Cores, which can be dynamically reconfigured to have different numbers of ALUs and amounts of cache. While memory bandwidth has become a critical resource in multicore and manycore processors, current IaaS clouds lack the ability to provision memory bandwidth on a per-customer basis according to customers' needs and payments. MITTS (Memory Inter-arrival Time Traffic Shaping) [ISCA 2016] is a distributed hardware mechanism that limits memory traffic at the source (core or LLC). MITTS shapes memory traffic based on memory request inter-arrival time using novel hardware, enabling fine-grain bandwidth allocation. In an IaaS system, MITTS enables cloud customers to express their memory distribution needs and pay commensurately. In a general-purpose multicore program, MITTS can be used to optimize for memory system throughput and fairness. MITTS has been implemented in the 32nm 25-core Princeton Piton processor [Hot Chips 2016], as well as the open-source OpenPiton [ASPLOS 2016] processor framework. The Sharing Architecture and MITTS provide fine-grain hardware configurability, which improves economic efficiency in IaaS clouds. However, cloud customers must determine how to use such fine-grain configurable resources to meet quality-of-service (QoS) requirements while minimizing cost. This is especially challenging for non-savvy customers. We propose CASH [ISCA 2016], a runtime system that uses a combination of control theory and machine learning to configure the architecture so that QoS requirements are met and cost is minimized. The presentation will cover these three major elements (the Sharing Architecture, MITTS, and CASH), along with my thesis project "COMPAC: Composable Hardware Accelerators", which addresses the use of composable hardware accelerators in the IaaS cloud. I will present how to co-design hardware and software in a highly configurable manner to improve IaaS cloud economic efficiency.
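As a toy software model of the inter-arrival-time shaping idea (a simplification, not the MITTS hardware), the Python sketch below admits a memory request only if the bin corresponding to the gap since the core's previous request still has credit:

    # Toy inter-arrival-time shaper: credits are held in bins indexed by the gap
    # (in cycles) since the previous request, so bursts with small gaps are
    # throttled once their bin runs out of credit.
    import bisect

    class InterArrivalShaper:
        def __init__(self, bin_edges, credits):
            self.bin_edges = bin_edges        # e.g. gap thresholds in cycles
            self.credits = list(credits)      # allowed requests per bin, per refill period
            self.last_request = None

        def admit(self, now):
            gap = float("inf") if self.last_request is None else now - self.last_request
            b = min(bisect.bisect_left(self.bin_edges, gap), len(self.credits) - 1)
            if self.credits[b] > 0:
                self.credits[b] -= 1
                self.last_request = now
                return True                   # request proceeds to the memory system
            return False                      # request is stalled (shaped)

    shaper = InterArrivalShaper(bin_edges=[10, 100, 1000], credits=[2, 4, 8])
    print([shaper.admit(t) for t in (0, 5, 8, 500, 505)])   # [True, True, True, True, False]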

Bio:
I am the first graduate student of Prof. David Wentzlaff. My research areas are computer architecture, operating systems, and parallel computing. I received my Bachelor's degrees in Electrical Engineering, Computer Engineering, and Mathematics from the University of Michigan and Shanghai Jiao Tong University. I worked at Microsoft Research as a research intern for two summers. Apart from research, I enjoy tennis, basketball, swimming, and yoga, to name a few. I love both classical and pop music, and I have been playing the violin for over ten years.