The objective of this one-day workshop is to bring researchers and practitioners together to investigate opportunities for exploiting virtualized active (compute-enabled) technologies, such as active memory, active networks, and active storage, to accelerate data-intensive workloads. The workshop also aims to investigate issues in realizing active capabilities (enabled by hardware accelerators such as SSDs, GPUs, FPGAs, and ASICs) across the entire system stack running on the cloud, and to provide a forum for academia and industry to exchange ideas through research and position papers.
The recent popularity of Big Data applications has renewed interest in developing and optimizing data-intensive workloads. Beyond the traditional scientific computing domain, Big Data phenomena are observed in a variety of new domains such as the Internet of Things (IoT), social analytics, personalized health and precision medicine, bioinformatics, energy informatics, and emergency response and disaster management. These workloads are characterized by unprecedented volume and velocity of data, both historic and transient; very high-speed data flows (e.g., from sensor networks); and a diverse variety of data, from completely unstructured text documents to structured relational tables or matrices to photos and videos. Analyzing vast quantities of such complex data (partially powered by machine learning) is becoming as important as traditional massive-scale data management.
Unfortunately, existing approaches to data-intensive problems are woefully inadequate to address the challenges raised by Big Data applications. Specifically, these approaches require data to be moved near the computing resources before it can be processed, and such data movement costs can be prohibitive for large data sets like those observed in the aforementioned workloads. One way to address this problem is to bring virtualized computing resources closer to the data, whether it is at rest or in motion. The premise of "active" systems is a new holistic view of the system in which every data medium (whether volatile or non-volatile) and every communication channel becomes compute-enabled.
Although prototypes of systems with active technologies are currently available, their capabilities are exploited in only a limited set of real-life problems. This workshop aims to evaluate different aspects of the active systems stack and to understand the impact of active technologies (including but not limited to hardware accelerators such as SSDs, GPUs, FPGAs, and ASICs) on different application workloads. Specifically, the workshop aims to understand the role of modern hardware in enabling active media (whether network, storage, or memory) over the entire path and lifecycle of data, especially as today's database systems opt for hierarchies of storage and memory. Furthermore, we aim to revisit the interplay among algorithmic modeling, compilers and programming languages, virtualized runtime systems and environments, and hardware implementations for effective exploitation of active technologies.
Topics of Interest
Topics of interest include but are not limited to:
- Data Management Issues in Active Systems (e.g., active network, storage, and memory)
- Data Management Issues in Software-Hardware-System Co-design
- Active Technologies (e.g., SSDs, GPUs, FPGAs, and ASICs) in Co-design Architectures
- Query Orchestration and Execution Models in Co-design Architectures
- Enabling Partial Computation or Best Effort Computation in Co-design Architectures
- Offloading Computation to Accelerators in Co-processor Design
- Placing Accelerators on the Data Path in Co-placement Design
- Programming Methodologies for Data-intensive Workloads on Active Technologies
- Virtualizing Active Technologies on Cloud (e.g., Scalability and Security)
- Exploiting Active Technologies in Modern Databases (e.g., NoSQL and NewSQL)
- Extending Runtime of Big Data Systems (e.g., Spark, Hadoop) with Active Technologies
- Autonomic Tuning for Data Management Workloads in Co-design Architectures
- Algorithms and Performance Models for Active Memory and Storage Sub-systems
- Novel Applications of Low-Power Modern Processors, GPUs, FPGAs, and ASICs
- Novel Applications of Transactional Memory in Co-design Architectures
- Workload-aware System Co-design for Emerging Applications (e.g., Internet-of-Things, Personalized Health, and Precision Medicine)
Workshop Organizers
- Rajesh R. Bordawekar (IBM T.J. Watson Research Center)
- Mohammad Sadoghi (Purdue University)
- Kaiwen Zhang (Technical University of Munich)
Program Committee
- Nipun Agarwal (Oracle)
- Spyros Blanas (Ohio State University)
- Khuzaima Daudjee (University of Waterloo)
- Peter M. Fischer (University of Freiburg)
- Blake G. Fitch (IBM Research, Zurich)
- Boris Glavic (Illinois Institute of Technology)
- Hans-Arno Jacobsen (Middleware Systems Research Group)
- Kajan Kanagaratnam (IBM, Toronto)
- Tirthankar Lahiri (Oracle)
- Mohammadreza Najafi (Technical University of Munich)
- Ilia Petrov (TU Darmstadt)
- Tilmann Rabl (TU Berlin)
- Tiark Rompf (Purdue University)
- Mohamed Sarwat (Arizona State University)
- Divesh Srivastava (AT&T Labs Research)
- Dina Thomas (Pure Storage)
- Stratis Viglas (University of Edinburgh)
Important Dates
- Paper submissions: December 10, 2016
- Notification to authors: January 10, 2017
- Camera-ready copy due: January 24, 2017
- Workshops: April 22, 2017
Submission Guidelines
All submissions must be prepared according to the ICDE formatting guidelines. All accepted papers will be published in the ICDE proceedings and will also become publicly available through IEEE Xplore. Submissions can be of two kinds:
- Regular Research Papers: These papers should report original research results or significant case studies. They should be at most 8 pages.
- Position Papers: These papers should report novel research directions or identify challenging problems. They should be at most 4 pages.
Papers have to be submitted electronically as PDF files via EasyChair.