Faculty Recruiting Support CICS

Recent Computer Science Ph.D. Graduates (February and May 2016)

Charles Curtsinger; Effective Performance Analysis and Debugging; Emery Berger, Advisor; May 2016; Assistant Professor, Department of Computer Science, Grinnell College

Abstract: Performance is once again a first-class concern. Developers can no longer wait for the next generation of processors to automatically "optimize" their software. Unfortunately, existing techniques for performance analysis and debugging cannot cope with complex modern hardware, concurrent software, or latency-sensitive software services.

While processor clock speeds have remained constant, increasing transistor counts have allowed architects to increase processor complexity. This complexity often improves performance, but the benefits can be brittle: small changes to a program's code, inputs, or execution environment can dramatically change performance, making deployed software unpredictable and complicating performance evaluation and debugging. Developers seeking large performance gains must resort to manual tuning. Software profilers are meant to guide developers to important code, but conventional profilers do not produce actionable information for concurrent applications: they report where a program spends its time, not where optimizations will yield performance improvements. Furthermore, latency is a critical measure of performance for software services and interactive applications, but conventional profilers measure only throughput. Many performance issues appear only when a system is under high load, yet generating this load in development is often impossible. Developers need to identify and mitigate scalability issues before deploying software, but existing tools offer them little or no assistance.

In this dissertation, I introduce an empirically driven approach to performance analysis and debugging, embodied in three systems. Stabilizer mitigates the performance variability inherent in modern processors, enabling both predictable performance in deployment and statistically sound performance evaluation. Coz conducts performance experiments using virtual speedups to create the effect of an optimization in a running application. This approach accurately predicts the effect of hypothetical optimizations, guiding developers to the code where optimizations will have the largest effect. Amp allows developers to evaluate system scalability using load amplification to create the effect of high load in a testing environment. In combination, Amp and Coz allow developers to pinpoint code where manual optimizations will improve the scalability of their software.
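
The virtual speedup idea can be illustrated with a toy discrete-event model (a sketch of the concept only, not the actual Coz implementation; the two-stage workload and the delay accounting are illustrative assumptions): to predict the effect of making one stage a fraction s faster, every execution of that stage charges a delay of s times its duration to the other thread, and the total inserted delay is subtracted from the measured finish time.

    # Toy model: two threads each repeat one stage n times; the program
    # finishes when the slower thread does. All names are illustrative.

    def finish_time(a_time, b_time, n):
        """Runtime of two independent threads repeating stages A and B."""
        return max(n * a_time, n * b_time)

    def actual_speedup(a_time, b_time, n, s):
        """Ground truth: actually make stage A a fraction s faster."""
        base = finish_time(a_time, b_time, n)
        sped = finish_time(a_time * (1 - s), b_time, n)
        return (base - sped) / base

    def virtual_speedup(a_time, b_time, n, s):
        """Predict the same effect without touching A: delay the other
        thread by s * a_time per execution of A, then subtract the
        total inserted delay from the measured finish time."""
        base = finish_time(a_time, b_time, n)
        delay = s * a_time
        delayed = max(n * a_time, n * (b_time + delay))
        predicted = delayed - n * delay
        return (base - predicted) / base

    for s in (0.1, 0.25, 0.5, 0.7):
        print(s, actual_speedup(0.01, 0.004, 100, s),
              virtual_speedup(0.01, 0.004, 100, s))

In this toy model the two estimates agree exactly; Coz's contribution is making such experiments work on real concurrent programs, where actually speeding up a line of code is impossible.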

Weize Kong; Extending Faceted Search to the Open-Domain Web; James Allan, Advisor; May 2016; Software Engineer, Google

Abstract: Faceted search enables users to navigate a multi-dimensional information space by combining keyword search with drill-down options in each facet. For example, when searching "computer monitor" in an e-commerce site, users can select brands and monitor types from the provided facets {"Samsung", "Dell", "Acer", ...} and {"LED-Lit", "LCD", "OLED", ...}. It has been used successfully in many vertical applications, including e-commerce and digital libraries. However, this idea is not well explored for general web search in an open-domain setting, even though it holds great potential for assisting multi-faceted queries and exploratory search.

The goal of this work is to explore this potential by extending faceted search into the open-domain web setting, which we call Faceted Web Search. We address three fundamental issues in Faceted Web Search, namely: how to automatically generate facets (facet generation); how to re-organize search results with users' selections on facets (facet feedback); and how to evaluate generated facets and entire Faceted Web Search systems.

In conventional faceted search, facets are generated in advance for an entire corpus, either manually or semi-automatically, and most previous work then recommends them for particular queries. However, this approach is difficult to extend to the entire web due to the web's large and heterogeneous nature. We instead propose a query-dependent approach, which extracts facets for a query from its web search results. We further improve our facet generation model under a more practical scenario, in which users care more about the precision of presented facets than recall.
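
As a rough illustration of the query-dependent idea (a hypothetical sketch, not the dissertation's facet extraction model), candidate facet terms can be mined from the text of a query's top-ranked results rather than from a corpus-wide taxonomy:

    import re
    from collections import Counter

    def candidate_facet_terms(result_texts, min_docs=3, max_terms=20):
        """Keep terms that recur across many of the query's top results."""
        doc_freq = Counter()
        for text in result_texts:
            terms = set(re.findall(r"[a-z][a-z-]+", text.lower()))
            doc_freq.update(terms)       # count each term once per document
        frequent = [t for t, c in doc_freq.items() if c >= min_docs]
        return sorted(frequent, key=doc_freq.get, reverse=True)[:max_terms]

A real system would additionally group such terms into coherent facets and rank the facets by quality, as the dissertation's models do.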

The dominant facet feedback method in conventional faceted search is Boolean filtering, which filters search results by users' selections on facets. However, our investigation shows Boolean filtering is too strict when extended to the open-domain setting. Thus, we propose soft ranking models for Faceted Web Search, which expand original queries with users' selections on facets to re-rank search results. Our experiments show that the soft ranking models are more effective than Boolean filtering models for Faceted Web Search.
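
The contrast between the two feedback styles can be sketched as follows (a minimal illustration under assumed document and score representations, not the dissertation's ranking models):

    def boolean_filter(results, selected_terms):
        """Strict feedback: keep only documents matching every selected term."""
        return [d for d in results
                if all(t in d["text"].lower() for t in selected_terms)]

    def soft_rerank(results, selected_terms, weight=0.5):
        """Soft feedback: treat selected facet terms as query expansion and
        re-rank, so partially matching documents are demoted, not dropped."""
        def expanded_score(d):
            matches = sum(t in d["text"].lower() for t in selected_terms)
            return d["score"] + weight * matches
        return sorted(results, key=expanded_score, reverse=True)

    results = [{"text": "Dell LCD monitor", "score": 1.0},
               {"text": "Dell keyboard", "score": 0.9}]
    print(boolean_filter(results, ["lcd"]))   # drops the keyboard entirely
    print(soft_rerank(results, ["lcd"]))      # keeps it, ranked lower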

To evaluate Faceted Web Search, we propose both intrinsic evaluation, which evaluates facet generation on its own, and extrinsic evaluation, which evaluates an entire Faceted Web Search system by its utility in assisting search clarification. We also design a method for building reusable test collections for such evaluations. Our experiments show that using the Faceted Web Search interface can significantly improve the original ranking if allowed sufficient time for user feedback on facets.

Kriste Krstovski; Efficient Inference, Search and Evaluation for Latent Variable Models of Text with Applications to Information Retrieval and Machine Translation; David A. Smith, Advisor; May 2016; Postdoctoral Research Scientist, Columbia University & University of Chicago

Abstract: Latent variable models of text, such as topic models, have been explored in many areas of natural language processing, information retrieval and machine translation to aid tasks such as exploratory data analysis, automated topic clustering and finding similar documents in mono- and multilingual collections. Many additional applications of these models, however, could be enabled by more efficient techniques for processing large datasets.

In this thesis, we introduce novel methods that offer efficient inference, search and evaluation for latent variable models of text. We present efficient, online inference for representing documents in several languages in a common topic space and fast approximations for finding near neighbors in the probability simplex representation of mono- and multilingual document collections. Empirical evaluations show that these methods are as accurate as --- and significantly faster than --- Gibbs sampling and brute-force all-pairs search, respectively. In addition, we present a new extrinsic evaluation metric that achieves very high correlation with common performance metrics while being more efficient to compute. We showcase the efficacy and efficiency of our new approaches on the problems of modeling and finding similar documents in a retrieval system for scientific papers, detecting document translation pairs, and extracting parallel sentences from large comparable corpora. This last task, in turn, allows us to efficiently train a translation model from comparable corpora that outperforms a model trained on parallel data.
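
One standard trick in this space (shown here as an illustrative sketch; the dissertation develops its own approximations) exploits the fact that the Hellinger distance between distributions p and q equals (1/sqrt(2)) * ||sqrt(p) - sqrt(q)||, so an elementwise square root maps the probability simplex into Euclidean space, where approximate nearest-neighbor methods apply directly:

    import numpy as np

    def hellinger(p, q):
        """Exact Hellinger distance between two probability vectors."""
        return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2)

    def lsh_signatures(distributions, n_bits=32, seed=0):
        """Random-hyperplane bit signatures of the sqrt-mapped vectors.
        Since each sqrt-mapped distribution has unit L2 norm, angular
        similarity is monotone in Hellinger distance, so documents whose
        signatures agree on most bits are candidate near neighbors and
        can be re-checked exactly with hellinger()."""
        rng = np.random.default_rng(seed)
        planes = rng.standard_normal((n_bits, distributions.shape[1]))
        return (np.sqrt(distributions) @ planes.T) > 0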

Lastly, we improve the latent variable model representation of large documents in mono- and multilingual collections by introducing online inference for topic models with a hierarchical Dirichlet prior structure over textual regions, such as document sections. Modeling variation across textual regions with online inference yields a more effective and efficient document representation than a plain bag of words, which usually handicaps the performance of these models on large documents.

Bo Liu; Algorithms for First-Order Sparse Reinforcement Learning; Sridhar Mahadevan, Advisor; Feb. 2016; Researcher, Philips Research

Abstract: This thesis presents a general framework for first-order temporal difference learning algorithms with an in-depth theoretical analysis. The main contribution of the thesis is the design of a family of first-order regularized temporal-difference (TD) algorithms using stochastic approximation and stochastic optimization. To scale TD algorithms up to large problems, we use first-order optimization to explore regularized TD methods with linear value function approximation. Previous regularized TD methods often rely on matrix inversion, which requires cubic time and quadratic memory. We propose two algorithms, sparse-Q and RO-TD, for on-policy and off-policy learning, respectively. Both algorithms exhibit linear computational complexity per step, and their asymptotic convergence guarantees and error bounds are established using stochastic optimization and stochastic approximation. The second major contribution of the thesis is a unified framework for stochastic-gradient-based temporal-difference learning algorithms that use proximal gradient methods. A primal-dual saddle-point formulation is introduced, and state-of-the-art stochastic gradient solvers, such as mirror descent and the extragradient method, are used to design several novel RL algorithms. Theoretical analysis is given, including regularization, acceleration analysis, and finite-sample analysis, along with detailed empirical experiments demonstrating the effectiveness of the proposed algorithms.
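
For intuition, here is a minimal first-order, l1-regularized TD(0) update with linear value function approximation (an illustrative baseline in this family, not the sparse-Q or RO-TD algorithms themselves); each update costs O(d) time and memory, with sparsity induced by a proximal soft-thresholding step:

    import numpy as np

    def soft_threshold(w, t):
        """Proximal operator of the l1 norm; zeroes out small weights."""
        return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

    def td0_l1_step(w, phi, phi_next, reward,
                    alpha=0.01, gamma=0.99, lam=1e-3):
        """One O(d) proximal-gradient TD(0) update on weight vector w,
        given feature vectors for the current and next states."""
        delta = reward + gamma * (phi_next @ w) - (phi @ w)   # TD error
        w = w + alpha * delta * phi                           # TD step
        return soft_threshold(w, alpha * lam)                 # l1 prox step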


Huong Phan; An Incremental Approach To Identifying Causes of System Failures Using Fault Tree Analysis; Lori Clarke, George Avrunin, Advisors; May 2016; Software Engineer, Google

Abstract: This work presents a systematic, incremental approach to identifying causes of potential failures in complex systems. The approach builds upon Fault Tree Analysis (FTA) but addresses several limitations of earlier automated techniques. FTA has been applied in a number of domains to determine what combinations of events might lead to a specified undesired event that represents a system failure. Given an undesired event, FTA constructs a fault tree (FT) and computes its cut sets, the sets of events that together could cause the undesired event. Such cut sets provide valuable insight into how to improve the design of the system being analyzed to reduce the likelihood of the failure. Manual FT construction can be tedious and error-prone, but previous approaches to automatic FT construction are limited to systems modeled in specific modeling languages and often fail to recognize some important causes of failures. These approaches also tend not to provide enough information to help users understand how the events in a cut set could lead to the specified undesired event and, at the same time, often produce too many cut sets to be helpful, especially for large, complex systems.
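
For readers unfamiliar with FTA, the notion of a cut set can be illustrated on a toy AND/OR fault tree (a hypothetical sketch of the standard concept, not the dissertation's construction algorithm):

    def cut_sets(node):
        """node is a basic-event name (str) or a tuple ('AND'|'OR', children)."""
        if isinstance(node, str):
            return [frozenset([node])]
        op, children = node
        child_sets = [cut_sets(c) for c in children]
        if op == "OR":                        # any one child causes the event
            return [cs for sets in child_sets for cs in sets]
        combos = [frozenset()]                # AND: pick one cut set per child
        for sets in child_sets:
            combos = [a | b for a in combos for b in sets]
        return combos

    def minimal(sets):
        """Drop any cut set that strictly contains another."""
        return [s for s in sets if not any(t < s for t in sets)]

    tree = ("AND", ["pump fails", ("OR", ["valve stuck", "sensor error"])])
    print(minimal(cut_sets(tree)))
    # two minimal cut sets: {pump fails, valve stuck}, {pump fails, sensor error}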

Our approach to identifying causes of potential system failures is incremental and consists of two phases that support selective exploration. In the first phase, a high-level FT, called the initial FT, is constructed based on the system's data and control dependence information and then the initial FT's cut sets, called the initial cut sets, are computed. In the second phase, users select one initial cut set for more detailed analysis. In this detailed analysis, additional control dependence information is incorporated and error combinations are considered to construct a more detailed FT, called the elaborated FT, that focuses on the chosen initial cut set. The cut sets of the elaborated FT, called the elaborated cut sets, are then computed, and concrete scenarios are generated to show how events in each of those elaborated cut sets could cause the specified undesired event. Our approach is applicable to any system model that incorporates control and data dependence information. The approach also improves the precision of the results by automatically eliminating some inconsistent and spurious cut sets.

Christopher Vigorito; Intrinsically Motivated Exploration in Hierarchical Reinforcement Learning; Andrew Barto, Advisor; Feb. 2016; Research Engineer, Osaro, Inc.

Abstract: The acquisition of hierarchies of reusable skills is one of the distinguishing characteristics of human intelligence, and the learning of such hierarchies is an important open problem in computational reinforcement learning (RL). In humans, these skills are learned during a substantial developmental period in which individuals are intrinsically motivated to explore their environment and learn about the effects of their actions. The skills learned during this period of exploration are then reused to great effect later in life to solve many unfamiliar problems very quickly. This thesis presents novel methods for achieving such developmental acquisition of skill hierarchies in artificial agents. Agents are rewarded for using their current skill set to better understand the effects of their actions on unfamiliar parts of their environment, which in turn leads to the formation of new skills and further exploration, in a life-long process of hierarchical exploration and skill learning.

In particular, we present algorithms for intrinsically motivated hierarchical exploration of Markov Decision Processes (MDPs) and finite factored MDPs (FMDPs). These methods integrate existing research on temporal abstraction in MDPs, intrinsically motivated RL, hierarchical decomposition of finite FMDPs, Bayesian network structure learning, and information theory to achieve long-term, incremental acquisition of skill hierarchies in these environments. Moreover, we show that the skill hierarchies learned in this fashion afford an agent the ability to solve novel tasks in its environment much more quickly than solving them from scratch.
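
One simple instantiation of intrinsic motivation (an illustrative sketch only; the dissertation's methods are built on structure learning and information-theoretic measures over FMDPs) rewards the agent for surprising transitions under its current empirical model, driving exploration toward poorly understood parts of the environment:

    import math
    from collections import defaultdict

    class SurpriseReward:
        """Intrinsic reward = surprise (-log p) of an observed transition
        under a Laplace-smoothed tabular transition model."""

        def __init__(self, n_states):
            self.n_states = n_states
            self.counts = defaultdict(lambda: defaultdict(int))

        def reward(self, s, a, s_next):
            seen = self.counts[(s, a)]
            total = sum(seen.values())
            p = (seen[s_next] + 1) / (total + self.n_states)
            seen[s_next] += 1                 # update the model afterwards
            return -math.log(p)               # rare transitions pay more

As the model improves, familiar transitions stop paying intrinsic reward, so the agent naturally moves on to new frontiers, the dynamic behind the life-long skill acquisition described above.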

To apply these techniques to environments with representational properties that differ from traditional MDPs and finite FMDPs requires methods for incrementally learning transition models of environments with such representations. Taking a step in this direction, we also present novel methods for incremental model learning in two other types of environments. The first is an algorithm for online, incremental structure learning of transition functions for FMDPs with continuous-valued state and action variables. The second is an algorithm for learning the parameters of a predictive state representation, which serves as a model of partially observable dynamical systems with continuous-valued observations and actions. These techniques serve as a prerequisite to future work applying intrinsically motivated skill learning to these types of environments.

Pengyu Zhang; Leveraging Backscatter for Ultra-low Power Wireless Sensing Systems; Deepak Ganesan, Advisor; May 2016; Postdoctoral Researcher, Department of Computer Science, Stanford University

Abstract: The past few years have seen dramatic growth in wireless sensing systems, with millions of wirelessly connected sensors becoming first-class citizens of the Internet. The number of wireless sensing devices is expected to surpass 6.75 billion by 2017, exceeding the world's population as well as the combined market of smartphones, tablets, and PCs. This growth, however, faces two pressing challenges: battery energy density and wireless radio power consumption. Battery energy density looms as a fundamental limiting factor because it has improved only slowly over the past several decades (3x over 22 years). Wireless radio power consumption is another key challenge because high-speed wireless communication is often far more expensive, energy-wise, than computation, storage, and sensing. To make matters worse, wireless sensing devices are generating ever more data. These challenges raise a fundamental question: how should we power and communicate with wireless sensing devices? More specifically, instead of using batteries, can we leverage other energy sources to reduce, if not eliminate, the dependence on batteries? Similarly, instead of optimizing existing wireless radios, can we fundamentally change how radios transmit wireless signals to achieve lower power consumption?

A promising technique for addressing these questions is backscatter, a primitive that enables RF energy harvesting and ultra-low-power wireless communication. Backscatter has the potential to reduce dependence on batteries because a device can obtain energy by rectifying the wireless signals transmitted by a backscatter reader. Backscatter can also work by reflecting existing wireless signals (WiFi, BLE) when these are available nearby. Because signal reflection consumes only microwatts of power, backscatter can enable ultra-low-power wireless communication.

However, using backscatter to communicate with wireless sensing devices presents several challenges. First, RF power decreases with distance, which limits the operational range of micro-powered backscatter devices and raises the question of how to maintain a communication link with a backscatter device despite the tiny amount of harvested power. Second, even though the backscatter RF front-end is extremely power-efficient, the computational and sensing overhead on backscatter sensors limits their ability to operate on a few microwatts of power. Such overhead is a negligible fraction of overall power consumption for platforms where radio power consumption is high (e.g., WiFi- or Bluetooth-based devices), but it becomes the bottleneck for backscatter-based platforms. Third, backscatter readers are not currently deployed in existing indoor environments to provide a continuous carrier for carrying backscattered information, so backscatter deployment is not yet widespread.

This thesis addresses these challenges by making the following contributions. First, we design a network stack that enables continuous operation despite harvested power that decreases with distance, by employing an OS abstraction: task fragmentation. We show that this network stack enables packet transfer even when the whole system is powered by a 3 cm x 3 cm solar panel under natural indoor light conditions. Second, we design a hardware architecture that minimizes the computational overhead of backscatter, enabling over 1 Mbps backscatter transmission while consuming less than 100 uW of power, a two-orders-of-magnitude improvement over the state of the art. Finally, we design a system that can leverage both ambient WiFi and BLE signals for backscatter. Our empirical evaluation shows that we can backscatter 500 bps of data on top of a WiFi stream and 50 kbps on top of a Bluetooth stream when the backscatter device is 3 m away from commercial WiFi and Bluetooth receivers.
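
The task fragmentation abstraction can be caricatured in a few lines (a toy simulation under assumed energy numbers, not the dissertation's network stack): split a packet transfer into small fragments, spend harvested energy one fragment at a time, and checkpoint progress so that a brown-out never loses completed work.

    import random

    def transfer(packet, fragment_size=8, cost_per_fragment=2.0):
        """Send `packet` in fragments, gated by (simulated) harvested energy."""
        energy = 0.0
        sent = 0        # checkpoint: in a real device this would live in
                        # nonvolatile memory and survive power loss
        while sent < len(packet):
            energy += random.uniform(0.0, 1.0)     # harvest a little charge
            if energy < cost_per_fragment:
                continue                            # sleep until enough energy
            energy -= cost_per_fragment
            fragment = packet[sent:sent + fragment_size]
            # radio.backscatter(fragment)          # hypothetical radio call
            sent += len(fragment)                   # advance the checkpoint
        return sent

    print(transfer(b"hello, backscatter world"))   # -> 24 bytes sent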


Yahan Zhou; Shape Design and Optimization for 3D Printing; Rui Wang, Advisor; Feb. 2016; Software Engineer, Google

Abstract: In recent years, 3D printing technology has become increasingly popular, with widespread use in rapid prototyping, design, art, education, medical applications, and the food and fashion industries. It enables distributed manufacturing, allowing users to easily produce customized 3D objects in the office or at home. Investment in 3D printing technology continues to drive down the cost of 3D printers, making them more affordable to consumers.

As 3D printing becomes more widely available, it also demands better computer algorithms to help users quickly and easily generate 3D content for printing. Creating 3D content often requires considerably more effort and skill than creating 2D content. In this work, I study several aspects of 3D shape design and optimization for 3D printing. I start by discussing my work on geometric puzzle design, a popular application of 3D printing in recreational math and art. Given user-provided input figures, the goal is to compute the minimum (or best) set of geometric shapes that satisfies the given constraints (such as dissection constraints). The puzzle design must also consider feasibility, such as avoiding interlocking pieces. I present two optimization-based algorithms that automatically generate customized 3D geometric puzzles, which can be printed directly for users to enjoy. They are also great tools for geometry education.

Next, I discuss shape optimization for printing functional tools and parts. Although current 3D modeling software allows a novice user to easily design 3D shapes, the resulting shapes are not guaranteed to meet physical strength requirements. For example, a poorly designed stool may collapse when a person sits on it; a poorly designed wrench may break under force. I study new algorithms that help users strengthen functional shapes so that they meet specific physical requirements. The algorithm uses an optimization-based framework: it performs geometric shape deformation and structural optimization iteratively to minimize mechanical stresses under the forces arising in typical use scenarios. Physically based simulation is performed at run time to evaluate the functional properties of the shape (e.g., mechanical stresses computed with finite element methods), and the optimizer uses this information to improve the shape. Experimental results show that my algorithm can successfully optimize various 3D shapes, such as chairs, tables, and utility tools, to withstand higher forces while preserving the original shape as much as possible.
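
The iterative loop can be sketched with a deliberately crude stand-in for the physics (hypothetical: the "shape" is a list of beam thicknesses and "stress" is modeled as load/thickness in place of a real finite element solve):

    def simulate(thicknesses, load):
        """Stand-in for FEM stress analysis: thinner members see more stress."""
        return [load / t for t in thicknesses]

    def optimize_shape(thicknesses, load, stress_limit=1.0,
                       step=0.1, max_iters=1000):
        """Alternate simulation and deformation until the strength target
        holds, changing the shape only where stress is too high."""
        for _ in range(max_iters):
            stresses = simulate(thicknesses, load)
            if max(stresses) <= stress_limit:
                break                                # strong enough; stop
            thicknesses = [t + step if s > stress_limit else t
                           for t, s in zip(thicknesses, stresses)]
        return thicknesses

    # thin members get thickened until the maximum stress meets the limit
    print(optimize_shape([0.5, 1.0, 2.0], load=1.5))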

To improve the efficiency of physics simulation for general shapes, I also introduce a novel SPH-based sampling algorithm that provides better tetrahedralization for use in the physics simulator. My new modeling algorithms greatly reduce design time, allowing users to quickly generate functional shapes that meet required physical standards.