Alberto Gonzalez

PhD Student in ICS at UH Manoa
View Alberto González Martínez's LinkedIn profile

Hi, welcome! I am Alberto Gonzalez.

I am a Spanish student at the University of Hawaii at Manoa, currently pursuing a PhD in Computer Science in the LAVA Laboratory. I have been studying and doing research at university for the last ten years, while also working full-time for several years at a multinational company in Spain, deploying air control systems. I hold a Bachelor's degree in Electronics Engineering, an MS in Control and Automation Engineering, and an MS in Computer Science. I have always been passionate about understanding state-of-the-art technology and developing gadgets and products, and I am especially interested in multidisciplinary projects that combine the life sciences with computer science and engineering.

Right now my main research and work focus is on visualization systems, data analysis, Machine Learning, and Artificial Intelligence, aiming for a better understanding of the environment we live in and the creation of better technologies for the world.

You can check my most recent projects and my past experience by browsing through the tabs of the menu.

Most of my recent work is related to visualization and data analytics. I am very interested in developing tools and techniques that help data scientists solve problems faster and better, allowing them to work in collaborative environments and take advantage of newly available ultra-high-resolution displays.
Recently I have been working on:
Articulate Project: Automatic translation of natural language queries into visualizations.
NetSage Project: Open privacy-aware network measurement, analysis, and visualization service designed to address the needs of today's international networks.

Part of my research also involves Virtual Reality: deriving insights from data through powerful visualization metaphors and interaction techniques that can augment the human ability to analyze and make sense of the heterogeneous, multivariate, often noisy datasets common in many scientific disciplines and business environments.

"Humans are not disabled. A person can never be broken. Our built environment, our technologies are broken and disabled. We the people need not accept our limitations, but can transcend disability with technological innovation."

Hugh Herr

I have developed websites for different purposes, from dynamic visualizations to repositories of records. This has let me work with frameworks such as Django, as well as multiple libraries such as jQuery, Bootstrap, D3, and KineticJS, to mention some examples.

Check the slides to see some of these projects, go to the websites, or watch the videos.
I have always been fascinated by robotics. It mixes a bit of all the engineering disciplines together, and it is probably one of the things that led me to pursue a Computer Science MS after finishing my Control Engineering MS, in order to become more multidisciplinary.

I have worked with ROS, using Python and Java modules to develop a robotic platform for testing a control architecture based on modeling the knowledge of the system. (The architecture was not mine.) The robot was able to patrol or navigate to a desired point autonomously. It could also recover from mechanical failures, such as a broken laser, and continue navigating by turning on another range sensor, such as a Kinect device.
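The sensor-failover behavior described above can be sketched in plain Python (no ROS dependency). This is a minimal illustration under assumed names, not the original implementation: the navigator reads from the first healthy sensor in a priority list, so when the primary laser fails it falls back to the Kinect.

```python
class RangeSensor:
    """Toy stand-in for a ROS range sensor (laser, Kinect, ...)."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def read(self):
        # A real sensor would return a scan; here we return a dummy range.
        if not self.healthy:
            raise RuntimeError(f"{self.name} failed")
        return 1.5  # meters


class FailoverNavigator:
    """Reads ranges from the first healthy sensor, in priority order."""
    def __init__(self, sensors):
        self.sensors = sensors  # ordered by priority, e.g. [laser, kinect]

    def get_range(self):
        for sensor in self.sensors:
            try:
                return sensor.name, sensor.read()
            except RuntimeError:
                continue  # sensor broken: try the next one in the list
        raise RuntimeError("no healthy range sensor available")


laser = RangeSensor("laser")
kinect = RangeSensor("kinect")
nav = FailoverNavigator([laser, kinect])

print(nav.get_range())   # reads from the laser while it is healthy
laser.healthy = False    # simulate the broken laser
print(nav.get_range())   # falls back to the kinect and keeps navigating
```

In the actual system the switch would be triggered by ROS diagnostics rather than an exception, but the priority-ordered fallback captures the recovery idea.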

I have also implemented some simulations for this project using different simulators, such as Gazebo and Player/Stage. You can see some of the images and videos from this project in the slides.


Kawano, N., Theriot, R., Lam, J., Wu, E., Guagliardo, A., Kobayashi, D., Gonzalez, A., Uchida, K., Leigh, J.
The Destiny-class CyberCANOE–a surround screen, stereoscopic, cyber-enabled collaboration analysis navigation and observation environment
IS&T Electronic Imaging 2017, Engineering Reality of Virtual Reality, 2017.
January 2017

The Destiny-class CyberCANOE is a hybrid-reality environment that provides 20/20 visual acuity in a 13-foot-wide, 320-degree cylindrical structure comprised of tiled passive stereo-capable organic light emitting diode (OLED) displays. Hybrid-reality systems such as Destiny, CAVE2, WAVE and the TourCAVE combine surround-screen virtual reality environments with ultra-high-resolution digital project-rooms. They are intended as collaborative environments that enable multiple users to work minimally encumbered, and hence comfortably, for long periods of time in rooms surrounded by data in the form of visualizations that benefit from being displayed at resolutions matching visual acuity and in stereoscopic 3D. Destiny is unique in that: it is the first hybrid-reality system to use OLED displays; it uses a real-time software-based approach rather than a physical optical approach for minimizing stereoscopic crosstalk when images are viewed severely off-axis on polarized stereoscopic displays; and it used Microsoft’s HoloLens augmented reality display to prototype its design and aid in its construction. This paper will describe Destiny’s design and implementation - in particular the technique for software-based crosstalk mitigation. Lastly it will describe how the HoloLens helped validate Destiny’s design as well as train the construction team in its assembly.

Kumar, A., Aurisano, J., Di Eugenio, B., Johnson, A., Gonzalez, A., Leigh, J.
Articulate2: Toward a Conversational Interface for Visual Data Exploration.
Poster Presented at the Information Visualization Conference at IEEE VisWeek in Baltimore
October 2016

InfoVis novices struggle with visualization construction. Even with the aid of visualization software, such users may face challenges when translating their questions into appropriate visual encodings, or interactively refining the representation to achieve a desired result. A 'conversational interface', which maintains a dialog with the user through natural language and gestures, could allow users to engage in repeated cycles of visualization generation and modification, asking questions directly through speech. In this poster we present a prototype conversational visual data analysis system. Our prototype was developed from a corpus of 15 subjects engaging in exploratory data visualization with a simulated conversational interface. It features 1) a speech-to-visualization pipeline, 2) a classification system to divide utterances into major types, and 3) a history manager and knowledge base.

J. Aurisano, A. Kumar, A. Gonzalez, J Leigh, B. DiEugenio, A. Johnson.
Towards a Dialogue System that Supports Rich Visualizations of Data
The 17th Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL 2016), Los Angeles, CA
September 2016

The goal of our research is to support full-fledged dialogue between a user and a system that transforms the user's queries into visualizations. So far, we have collected a corpus where users explore data via visualizations; we have annotated the corpus for user intentions; and we have developed the core NL-to-visualization pipeline.

Alberto Gonzalez, Jason Leigh, Sean Peisert, Brian Tierney, Andrew Lee, and Jennifer M. Schopf
NetSage – Measurement and Monitoring for International Links
TNC16 Conference 12-16 June 2016, Prague, Czech Republic
June 2016

NetSage is a project to develop a unified, open, privacy-aware network measurement and visualization service to address the needs of today's international networks. Modern science is increasingly data-driven and collaborative in nature, producing petabytes of data that can be shared by tens to thousands of scientists all over the world. The National Science Foundation-supported International Research Network Connections (IRNC) links have been essential to performing these science experiments. Recent deployment of Science DMZs [Dart, E. et al., 2013], both in the US and other countries, is starting to raise expectations for data throughput performance for wide-area data transfers. New capabilities to measure and analyze the capacity of international wide-area networks are essential to ensure end-users are able to take full advantage of such infrastructure. NetSage will provide the network engineering community, both US domestic and international, with a suite of tools and services to more deeply understand: 1) the current traffic patterns across IRNC links, and anticipate growth trends for capacity-planning purposes; 2) the main sources and sinks of large, elephant flows to know where to focus outreach and training opportunities; and 3) the cause of packet losses in the links and how they impact end-to-end performance.

Aurisano, J., Kumar, A., Gonzalez, A., Reda, K., Leigh, J., DiEugenio, B., Johnson, A.
“Show me data” Observational study of a conversational interface in visual data exploration
IEEE Visualization 2015, Chicago, IL, Honorable Mention
October 2015

A natural language interface to visual data exploration would allow a user to directly specify questions through speech, allowing the user to focus on higher-order tasks, such as hypothesis generation and question formulation. However, visual data exploration involves repeated cycles of visualization construction and interaction, as well as reasoning across many visualizations generated over the course of an exploratory session. A 'conversational interface', which maintains a dialog with the user through natural language and gestures, could support these complex tasks. We conducted an observational, exploratory study to observe the interaction between a subject and a remote data analysis expert (DAE) who assists the subject in an exploratory data analysis task.

K. Reda, A. Gonzalez, J. Leigh, M. Papka.
Tell Me What Do You See: Detecting and Summarizing Perceptually-Separable Patterns for Exploratory Data Analysis
IEEE Visualization 2015, Chicago, IL
October 2015

K. Reda, A. Gonzalez, J. Leigh, M. Papka.
Tell Me What Do You See: Detecting perceptually-separable visual patterns via clustering of image-space features in visualizations
8th Annual Postdoctoral Research Symposium, Argonne National Laboratory, Argonne, IL, Robert G. Sachs award, 1st place
October 2015

Visualization helps users infer structures and relationships in the data by encoding information as visual features that can be processed by the human visual-perceptual system. However, users would typically need to expend significant effort to scan and analyze a large number of views before they can begin to recognize relationships in a visualization. We propose a technique to partially automate the process of analyzing visualizations. By deriving and analyzing image-space features from visualizations, we can detect perceptually-separable patterns in the information space. We summarize these patterns with a tree-based meta-visualization and present it to the user to aid exploration. We illustrate this technique with an example scenario involving the analysis of census data.