SMART CHICAGO IS MOVING!!!

Good News!!! The Smart Chicago team is moving and will now be co-located with the City Digital Team at UI Labs. As such, our individual email addresses will be changing to:

  • Kyla Williams
  • Sonja Marziano
  • Denise Linn
  • Leslie Durr

Our new mailing address is 1415 N. Cherry Avenue, Chicago, IL 60642, and our general phone number is 312.281.6900.

Please check our website at smartchicagocollaborative.org or follow us on Twitter @smartchicago for more updates.

We appreciate your patience during this time of transition.

CUTGroup #30 – Chicago Park District

Full room at CUTGroup test #30 of the Chicago Park District website

For our thirtieth Civic User Testing Group (CUTGroup) session, we tested the current Chicago Park District website. The main goal of this test was to understand the user experience in preparation for an upcoming re-platforming and redesign of the website. We wanted to understand how users currently navigate the site when completing tasks, how they search for information, and what improvements would make search and navigation easier.

Segmenting

We were interested in testing specifically with Chicago residents who had different experiences with the Chicago Park District and its website. Therefore, we asked our CUTGroup testers whether they had visited the Chicago Park District website before, and if so, for what reasons. We were also interested in how frequently they visited parks across different seasons and the activities they enjoyed doing at the parks. Finally, we were looking to have half of our testers test on their mobile devices, while the other half would test on laptops.

Screening Questions

During our initial call-out for testers, we heard from 70 CUTGroup members. Of these respondents, 61 (87%) had used the Chicago Park District website in the past. 44 of these 61 respondents (72%) used it to find an activity, 41 (67%) used it to find event information, 38 (62%) used it to find a park or facility, 31 (51%) used it to find a program, 12 (20%) used it to find employment, and 9 (15%) used it to apply for a permit or rental.

Out of the 70 respondents to our screening questions, 37 (53%) visit a Chicago park at least once a week during the Spring and Summer months, and 16 (23%) visit a park at least once a week during the Fall and Winter months.

We were looking to test with between 20 and 25 testers, and 21 testers participated: 7 tested on laptops, 6 on Android mobile devices, and 8 on iOS mobile devices.

Test Format

We conducted an in-person, moderated usability test where each tester was paired up with a CUTGroup proctor or a Chicago Park District staff member. Proctors helped guide testers to complete tasks on the website, observed how the tester navigated and used the site, and took notes on their experiences and ideas for improvements. One-on-one proctoring like this allows us to work closely with a tester to see if the website functions in an expected way.

Results

Contributions to this test were made by our CUTGroup proctors: Erik Hernandez, Peter McDaniel, and Nancy Simon all assisted with moderating the test, and CUTGroup proctor Nathalie Rayter wrote the analysis report. The CUTGroup Proctor Program trains once-highly-active CUTGroup testers in usability (UX) testing and CUTGroup test processes.

On February 8, 21 CUTGroup testers tested the Chicago Park District website at the Near North library.

This presentation was shared with the Chicago Park District team and highlighted some of the top results from the test. 

Many testers had trouble indicating whether it was easy or difficult to search for information.

In total, testers were asked to complete 8 unique tasks in this CUTGroup test. There were some tasks that were easier to complete than others:

Easy & Challenging tasks

 

At the end of this CUTGroup test, we asked testers how easy or difficult it was to use the website overall; a plurality of 9 testers (43%) said it was “neutral,” either weighing the easy and the difficult tasks equally or identifying particular pain points that hindered their experiences. One of these pain points was the website’s search functionality. There are several possible changes that could be implemented in a redesigned site to make searching for specific information easier.

Broaden allowable key terms in the search box.

Throughout the CUTGroup test, testers experienced varying degrees of success in using the search box to complete their tasks. Testers liked the availability of suggested search terms, but there were multiple instances during test tasks when a user’s first choice was to enter a key term and a ZIP code into the search box. These searches sometimes led to relevant information, but often did not. For instance, tester Mia (#1) searched “dog park 60610,” but the returned results were confusing, and she could not identify the dog park nearest to her home.

Although ZIP code search is built into the “Find a Park” and “Find a Facility” functions, it is likely that Chicago Park District site visitors will continue to search using ZIP codes in the search box. A possible solution to increase the effectiveness of these search queries would be to prompt users who enter geographic data like ZIP codes to instead use one of the aforementioned search tools. For instance, the results of a search for “dog park 60610” might be accompanied by a simple linked statement: “I see you’re searching for something near 60610. Have you tried searching by Park or Facility?” Directing users to these more customizable search tools could reduce search times and help them find accurate information more quickly.
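As a rough sketch of how such a prompt might work (TypeScript, purely illustrative; the function name, ZIP pattern, and message wording are our own assumptions, not the Park District’s code):

```typescript
// Hypothetical helper: detect a Chicago-area ZIP code inside a free-text
// query and, if found, build a suggestion that points users at the
// ZIP-aware "Find a Park" / "Find a Facility" tools.
function suggestZipSearch(query: string): string | null {
  // 606xx covers most of Chicago; 60707 and 60827 are edge cases (approximate).
  const zipMatch = query.match(/\b(606\d{2}|60707|60827)\b/);
  if (!zipMatch) return null;
  return (
    `I see you’re searching for something near ${zipMatch[1]}. ` +
    `Have you tried searching by Park or Facility?`
  );
}

console.log(suggestZipSearch("dog park 60610"));
// -> "I see you’re searching for something near 60610. Have you tried searching by Park or Facility?"
```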

Increase the visibility of search filters.

Difficulty using search filters was one of the recurring challenges that testers encountered during park and facility search tasks. Notably, after completing the indoor pool task, 3 testers expressed a desire for filtering by descriptors like “indoor” and “zero depth entry”; they did not see that those filters already exist on the “Find a Facility” page, nested underneath dropdown categories like “By Descriptor” or “By Facility,” which are closed by default. Additionally, these filter menus appear underneath the search headings, but they are not labeled as “filters.”

Site users might experience greater ease in searching if the visibility of these existing filters were increased. One possible solution is to cue users to filter their searches by adding prominent, familiar language like “Filter search results” or “Advanced Search” to the initial search page. Another way to guide users might be to add the prefix “Filter by” to the categories. For example, use “Filter by Descriptor” instead of just “By Descriptor.”

Testers also recommended adding a more prominent search button, like an arrow or a “GO” button, adjacent to these filters so that users wouldn’t have to scroll down to submit filtered search queries.

Testers thought that information and search results could be more relevant to them with more options to search by location or audience.

Throughout the tasks of this CUTGroup test, testers encountered some difficulty in parsing what information was relevant to their tasks, whether based on location or audiences served. Here are some changes that could be made to improve the relevance of users’ searches.

Allow users to search for parks and facilities by address.

In the location search tasks, several testers experienced confusion over which of the returned search results were nearest to their intended locations, even after they used a location filter to narrow their searches; this is likely due to the large footprints of many ZIP codes and community areas. For example, during the park search, Twiller39 (#7) assumed that the first result from their search by community area would be the closest to the library where the test was held, simply because it was listed first.

Although ZIP code and neighborhood search exist, the results don’t sort based on proximity to a point within that ZIP code. Moreover, users don’t always know what their neighborhoods’ official boundaries are.

The most straightforward solution to this challenge would be to add a “Search by Address” function. Users would be able to enter a fixed address into the search as a selector, and the search would return results sorted by distance from that point. This would help alleviate some of the confusion over which parks or facilities were nearest to the desired location.
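A minimal sketch of the distance-sorting step, assuming the entered address has already been geocoded to a latitude/longitude (TypeScript; the Park type and sample data are hypothetical):

```typescript
interface Park { name: string; lat: number; lon: number; }

// Great-circle (haversine) distance in kilometers between two points.
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(a)); // Earth radius ~6,371 km
}

// Return a copy of the results sorted by distance from the geocoded address.
function sortByDistance(parks: Park[], lat: number, lon: number): Park[] {
  return [...parks].sort(
    (a, b) => haversineKm(lat, lon, a.lat, a.lon) - haversineKm(lat, lon, b.lat, b.lon)
  );
}

// Toy usage with made-up coordinates:
const parks: Park[] = [
  { name: "Park A", lat: 41.90, lon: -87.63 },
  { name: "Park B", lat: 41.95, lon: -87.70 },
];
console.log(sortByDistance(parks, 41.91, -87.64).map((p) => p.name)); // -> ["Park A", "Park B"]
```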

Another possible solution would be for the Chicago Park District website to request location access from the user. If a site visitor gave permission for the website to access their specific location, query results could be sorted by distance from that point by default; additionally, the map view of results could display a beacon to indicate where the visitor was accessing the website from. Clarifications like these could help visitors quickly obtain more relevant search results.
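In a browser, the permission request itself is a small wrapper around the standard Geolocation API; a hedged sketch (TypeScript; the render callback is a hypothetical hook, not part of the current site):

```typescript
// Ask the visitor for their position; on success, hand the coordinates to
// whatever sorts and renders the results. On failure or missing support,
// fall back silently to the default (unsorted) list.
function requestLocationAndSort(render: (lat: number, lon: number) => void): void {
  if (!("geolocation" in navigator)) return; // unsupported browser
  navigator.geolocation.getCurrentPosition(
    (pos) => render(pos.coords.latitude, pos.coords.longitude),
    () => { /* permission denied: keep the default list order */ }
  );
}

// Hypothetical usage:
// requestLocationAndSort((lat, lon) => console.log(`sort results around ${lat}, ${lon}`));
```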

Eliminate the search radius, or keep it from denying search results.

There were several instances throughout the CUTGroup test when testers were unable to find meaningful results because of the radius of their ZIP code search. For example, while looking for the indoor pool nearest to his home, the initial combination of Regular Guy’s (#21) ZIP code and the “indoor pool” descriptor did not yield any results. During the task of finding a dog-friendly park nearest to their homes, 7 testers found that filtering results by their ZIP code or Community Area yielded no results because there were no dog-friendly parks in their areas, and they had to expand their search areas.

This process can be eased and even shortened by making changes to the way that ZIP code search works. One possible solution is to prompt users to expand the radius of their search. A ZIP code search with too small of a radius (the default is 1 mile) results in a page that says, “Your Zip Code search for “606__” returned 0 results.” The returned text could go on to recommend that users try expanding the radius.
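One way this could look under the hood, sketched with the search function passed in so the example stays independent of any real Park District API (TypeScript, illustrative only):

```typescript
// Illustrative sketch: when a ZIP-code search returns nothing at the default
// 1-mile radius, retry with a wider radius before showing "0 results".
type ZipSearch = (zip: string, radiusMiles: number) => string[];

function searchWithExpandingRadius(
  search: ZipSearch,
  zip: string,
  radii: number[] = [1, 2, 5, 10] // the current site's default radius is 1 mile
): { radius: number; results: string[] } {
  for (const radius of radii) {
    const results = search(zip, radius);
    if (results.length > 0) return { radius, results };
  }
  return { radius: radii[radii.length - 1], results: [] };
}

// Usage with a toy in-memory search (park name is made up for the demo):
const demoSearch: ZipSearch = (_zip, radius) => (radius >= 5 ? ["Demo Dog Park"] : []);
console.log(searchWithExpandingRadius(demoSearch, "60610"));
// -> { radius: 5, results: ["Demo Dog Park"] }
```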

Another solution is to eliminate the search radius entirely; in this scenario, when a user enters their ZIP code to search for a facility, the returned search results could include an estimated distance from the searched ZIP code and the ability to sort the results by distance.

Make Map view of search results more prominent.

Across the three tasks in which it was tracked (Park nearest to the library, indoor pools, and dog-friendly park), 73% of testers’ searches were primarily conducted using list view; only 8% of searches used the website’s map view. However, particularly in the search for nearby parks, testers had difficulty identifying which of the listed search results were nearest, and tester Eddy (#13) even said that he needed a satellite view, apparently not realizing there was a map view option.

This navigational confusion could be eased if map view results were made more apparent. On the current website, the toggle between “List view” and “Map” is small and inconspicuous, above the search results and filter and below the heading. This text could be enlarged and emphasized to call users’ attention to the map option.

Another possible solution would be to guide users to the map by showing it to them right away. The default of displayed search results could be a list accompanied by a small map to the side, with the option to toggle to or expand the map.

Send or display customized information to users based on location and demographics.

When asked about the value a redesigned Chicago Park District website could have for them, 2 testers said it would be valuable to get customized information about the parks nearest and most relevant to them.

This could be addressed by allowing users to assign a park or community area as their home base. New users could be prompted to “Choose My Park” when they access the Chicago Park District website for the first time, and this choice could determine some of the links displayed on the homepage, as well as featured search results. The selection could then be retained for future visits.
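A minimal sketch of retaining that choice across visits, using the browser’s standard localStorage (TypeScript; the storage key and UI hooks are hypothetical):

```typescript
const MY_PARK_KEY = "cpd.myPark"; // hypothetical storage key

function getMyPark(): string | null {
  return localStorage.getItem(MY_PARK_KEY);
}

function setMyPark(parkName: string): void {
  localStorage.setItem(MY_PARK_KEY, parkName);
}

// On page load: prompt first-time visitors, tailor the homepage for returners.
const myPark = getMyPark();
if (myPark === null) {
  // showChooseMyParkPrompt(); // hypothetical UI hook
} else {
  // renderHomepageFor(myPark); // hypothetical UI hook
}
```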

Another possible solution is the creation of customized alerts. Right now, the CPD email newsletter sign-up offers only a few choices for content customization. A future form might allow users to select several parks and/or audiences (such as families, teens, or adults) they are interested in receiving information about; users would then receive regular digests of updates pertinent to those selections (perhaps automated by RSS aggregation).
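As a sketch of how such a digest might be assembled once feed items carry park and audience tags (TypeScript; the FeedItem and Subscription shapes are hypothetical stand-ins for an RSS-aggregation backend):

```typescript
interface FeedItem { title: string; park: string; audience: string; url: string; }
interface Subscription { parks: string[]; audiences: string[]; }

// Keep only the updates matching a subscriber's selected parks and audiences.
function buildDigest(items: FeedItem[], sub: Subscription): FeedItem[] {
  return items.filter(
    (item) => sub.parks.includes(item.park) && sub.audiences.includes(item.audience)
  );
}

// Toy usage with made-up data:
const digest = buildDigest(
  [{ title: "Teen Night", park: "Humboldt Park", audience: "teens", url: "https://example.org/1" }],
  { parks: ["Humboldt Park"], audiences: ["teens", "families"] }
);
console.log(digest.length); // -> 1
```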

Accessing the Chicago Park District website on mobile devices posed additional challenges to testers using their smartphones.

14 testers (66%) completed the CUTGroup test on a smartphone or tablet, and throughout the tasks, many of them encountered some challenges in using the Chicago Park District’s mobile website.

Improve mobile UX by ensuring responsiveness.

Some testers commented on the way the webpages of the Chicago Park District site displayed on their devices. Tester Hockey312 (#9) noticed that the mobile pages had to be resized throughout their navigation experience, including the Chicago Park District ActiveNet portal. Treasure (#2) was unable to locate an indoor pool near her home using her phone because the displayed information was “super small” and she had trouble zooming in to view it. Lauren1 (#19) said the menu bar took up a lot of space at the top of the mobile homepage.

The mobile user experience of the Chicago Park District website can be improved by ensuring that a redesigned website is fully mobile-friendly. One solution is to ensure that each web page is responsive to the device and browser it is displayed in; ideally, users would not have to resize pages in order to make out the displayed text.

The mobile experience could also be improved by increasing the size of displayed text and reducing the page real estate currently occupied by banner images and menu bars.

Make Map view more mobile-friendly.

Several testers described difficulty using Map view on their mobile devices. When searching for a dog-friendly park near their home, Angel (#8) found it difficult to use the Map view of results on their smartphone because it was challenging to zoom in and out of the map display. Currently, when a user toggles to Map view in a mobile browser, the map does not immediately appear, and the user must scroll down to reach the map interface. They must then use a two-finger scroll to move across the map.

This experience could be improved by displaying the map immediately when a user selects Map view, either taking them to a new page or by automatically scrolling down to where the map appears.

Final Report

Here is a final report of the results:

Here is the raw test data:


CUTGroup #30 | Chicago Park District

 

CUTGroup #28 – Chicago Open Data Portal

CUTGroup Test in Progress on mobile device

For our twenty-eighth Civic User Testing Group (CUTGroup) session, we tested the newly redesigned homepage for the City of Chicago’s Open Data Portal. The Open Data Portal allows users to find various datasets regarding the City of Chicago. The City of Chicago Department of Innovation and Technology (DoIT) is working with Socrata to redesign the Open Data Portal, currently focusing on the homepage, to make it more user-friendly for finding datasets and to represent the data and technology initiatives and applications created with open data.

The main goal of this test was to understand how testers with some familiarity with the data portal (even minimal) respond to the changes made to the homepage. We wanted to capture how residents with different levels of digital and data skills search, and what homepage structure makes searching easiest. Lastly, we wanted to see how responsive testers were to the other content relating to the programs and tech initiatives of the City of Chicago DoIT.

Segmenting

On September 22, 2016, we sent an email to 1,172 CUTGroup testers who live in Chicago, and a notification via text message to our SMS-preferred testers. We wanted to know if they would be available for an in-person test on September 28. When segmenting our testers, we were interested in testing on a variety of devices. We wanted to include testers who had used the Open Data Portal in the past and those who had never used it before. We also wanted to include testers with all levels of data experience to see how user-friendly the search functionality is.

Screening Questions

During our initial call-out for testers, we heard from 60 CUTGroup members. We asked how familiar CUTGroup members are with the City of Chicago’s open data portal and learned:

5 – Very familiar: 8% (5)
4 – Familiar: 22% (13)
3 – Neutral: 20% (12)
2 – Not very familiar: 30% (18)
1 – Not at all familiar: 20% (12)

23 out of 60 respondents (38%) had used the Chicago Open Data Portal before. 6 of those respondents specifically used the open data portal to search for crime data.

Test Format

For this in-person test, each tester was paired up with a proctor who was either a City of Chicago DoIT employee involved in the project or a proctor from the CUTGroup Proctor Program. Proctors asked testers to complete tasks on the Open Data Portal beta, observed the results, and took notes on the testers’ experiences and feedback. We also wanted testers to test either on laptops that we provided or on their own mobile devices. We tested with 6 testers on laptops, 6 testers on Android mobile devices, and 5 testers on iOS mobile devices.

Results

Contributions to this test were made by our CUTGroup proctors: Erik Hernandez, Peter McDaniel, Christopher Gumienny, Steven Page, and April Lawson helped facilitate this test, and CUTGroup proctor Christopher Gumienny also wrote much of the analysis report. The CUTGroup Proctor Program trains once-highly-active CUTGroup testers in usability (UX) testing and CUTGroup test processes.

On September 28, a very rainy day, 17 CUTGroup testers tested the Chicago Open Data Portal beta at the Woodson Regional Library in the Washington Heights neighborhood.

This presentation, shared with the City of Chicago DoIT team, highlighted top results from the test.

Testers believed that the redesigned Chicago Open Data Portal homepage is designed for the general public and residents, but many expected access to city services.

At the end of this CUTGroup test, we asked testers if they felt they were the target audience for the new, redesigned Open Data Portal, and 13 testers (76%) said “yes.” Our testers included individuals who had no, little, or some familiarity with the current data portal, and 14 testers mentioned that this site appeared to be designed for the general public and residents.

3 testers specifically said the Open Data Portal is designed for people with moderate to high technical savviness. 1 tester mentioned it was for people interested in data analysis. Another tester said it was built for business owners, while another said it was built for developers.

One concern is that testers, even after reviewing the website, still expected more access to city services and resources that could be found on the official City of Chicago website. There are some possible solutions that would either provide this general service information to residents or better define the purpose of the website.

Identify access points to city services or action steps for residents.

There are many different levels of creating these access points to city services that residents expected when first viewing the Open Data Portal. A very simple option is to add a link to the City of Chicago website in the navigation bar or footer of the homepage.

A second, more complex level is at the category level. For instance, if a user clicks on the “Events” category, there could be a link to the City of Chicago events page before users even begin to explore datasets. This level of access would require reviewing all of the categories and understanding whether users associate city services with those category types.

The most complex level is at the dataset level. Since we conducted usability testing primarily on the homepage and the action of searching for datasets and reviewing information, this would require additional user feedback. The suggestion is to connect relevant datasets to the appropriate city services. Therefore, if a user is reviewing data about potholes, as one example, there could be a link to make a pothole service request.
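At its simplest, that dataset-to-service connection could be a lookup table consulted when a dataset page renders; a hedged sketch (TypeScript; both the dataset identifiers and the placeholder URLs are hypothetical):

```typescript
// Hypothetical mapping from dataset identifiers to related city-service pages.
const datasetToService: Record<string, string> = {
  "potholes-patched": "https://www.chicago.gov/...", // pothole service request (placeholder URL)
  "business-licenses": "https://www.chicago.gov/...", // business licensing (placeholder URL)
};

// Returns a service link to display alongside the dataset, if one is defined.
function serviceLinkFor(datasetId: string): string | undefined {
  return datasetToService[datasetId];
}

console.log(serviceLinkFor("potholes-patched")); // -> placeholder service URL
```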

Utilize the header image to clarify the purpose of the site.

Data Portal Beta Homepage Header

“What’s this sales thing? It makes it look like it’s advertising stuff,” GPBlight (#4) said in response to reviewing the header images. When the homepage first loaded, a bug was immediately noticeable: the header images would stack and cause the page to jump on load. While that is a straightforward fix, we identified a larger opportunity to share information about what the website does. Very few testers reacted to the initial “data sale” image, and those who did react had negative responses. The first header image, before rotating to either the crime or Divvy data, should define the purpose of the site as clearly and directly as possible. The current “data sale” language and images confuse some users.

Searching for datasets was not always intuitive when using search or categories and testers often had to try multiple times to complete a task.

We asked testers to search for six different datasets; one task was open-ended, while for the rest, proctors asked testers to find specific datasets: building permits, a map of crime data, recently fixed potholes, active business licenses, and Chicago’s community boundaries map. We were interested to see how testers would search for and find the datasets and what issues they came across during the process. While 12 testers (70%) indicated that finding these datasets was “easy” or “very easy,” we observed that it often took testers multiple tries to find the correct dataset, and some testers could not find the proper result or chose a relevant, but not requested, dataset.

Categories were often chosen over the search bar, but technical bugs and design influenced those choices.

Only 1 tester out of 17 (6%) used the search bar consistently for all of the dataset search tasks, whereas 7 testers (41%) used categories to complete all dataset search tasks. The remainder of testers used both search and categories to find what they were looking for depending on the type of dataset we requested them to find.

On mobile devices, the magnifying glass of the search bar was not working properly. Therefore, if testers attempted a search, nothing would appear, and testers thought the search was broken or that there were no search results.

When doing the initial review of the page, only 1 tester mentioned the search bar, which indicates that it is not a prominent part of the page. When asked about what actions they knew they could do from reviewing their page, only 2 testers mentioned the search bar. When asked to take their first action, 3 testers (18%) searched whereas 10 (59%) clicked on a data catalog category.

Data Portal Homepage search/categories

We understand that a lot of these testers did not have much experience with the data portal and the data catalog categories are useful for exploration purposes. Nonetheless, we realized that the search bar was not very prominent for testers. Suggested improvements included larger font, higher contrast, and a more prominent location on the page.

Categorization needs to be more intuitive and filtering a higher priority in the user experience, or search needs to be more flexible.

While testers knew where to find building permit and crime data, finding recently patched potholes, active business licenses, and community boundaries was more challenging. In one example, 10 out of 11 testers who found the active business licenses dataset had explored other categories first.

Testers expected to find the potholes data in these categories: Public Safety, Environment, and Sanitation. Testers expected to find active business licenses in Administration and Finance. Lastly, testers looking for community boundaries looked in the Community category.

Some testers did use the help text on the categories to decide which category to choose, but it was difficult to decipher the order of results and whether a category contained the relevant dataset after choosing it. Testers did not use the filters on the search results page, and upon reviewing a few datasets without identifying a relevant one, they would often fall back on search or on another category. Giving users an understanding of the order of results, and then the ability to filter easily to the most appropriate ones, would benefit this experience. Improving the filter functionality would make it easier to place datasets in multiple relevant categories and give users ways to find the data more quickly.

We also witnessed that testers did not always distinguish between the types of data they were choosing: a dataset, data lens, filtered view, or map. This was evident when we asked testers to find a map of crime data and they chose the dataset or filtered-view results that appeared higher on the results list.

Data Portal Search Results Page

A primary search method should be defined for the user. Currently, users can choose a category or search, but both have challenges. Outlined above are results and improvements for the categories, since that was the prominent way testers searched: for each task, 11 or more testers used the categories to complete their search. If the search box were more prominent on the page and functioned better on mobile, we could identify whether search was a better way for testers to complete these tasks. We did learn that testers had difficulties with the search terms they used, and spelling errors caused stopping points in the experience. The bug on mobile also caused a stopping point that forced testers to use categories. If search becomes the primary method of finding datasets, it should be flexible enough to account for spelling errors and still find relevant resources.
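A minimal sketch of what spelling-tolerant matching could look like, using Levenshtein edit distance between a query term and dataset title words (TypeScript; the threshold and sample titles are illustrative, not the portal’s actual search):

```typescript
// Classic dynamic-programming edit distance between two strings.
function editDistance(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)   // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// A title matches if any of its words is within maxDistance edits of the term.
function fuzzyMatches(query: string, titles: string[], maxDistance = 2): string[] {
  const term = query.toLowerCase();
  return titles.filter((title) =>
    title.toLowerCase().split(/\s+/).some((word) => editDistance(term, word) <= maxDistance)
  );
}

console.log(fuzzyMatches("permts", ["Building Permits", "Crimes", "Business Licenses"]));
// -> ["Building Permits"]
```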

Testers liked the “Digital Chicago” & “How to Videos” resources on the homepage, but the “Chicago apps” were seen as being the most relevant.

Testers reviewed the other sections of the homepage, including the “Digital Chicago” section, which showcases recent articles by the City of Chicago’s Department of Innovation and Technology; the “How to Videos,” which show how to use the data portal and other tools; and the “Chicago apps,” tools that use open data to let people better visualize data (OpenGrid) or connect to city services (311 Service Tracker).

48% of testers thought the “Digital Chicago” section was “relevant” or “very relevant,” but testers felt that the name “Digital Chicago” did not mean much to them (calling this section “news” could be an improvement); they had questions about the articles and what they meant, and the section lacked descriptive content for residents.

66% of testers thought the “How to Videos” section was “valuable” or “very valuable,” and a lot of testers liked receiving information via video, although there was a consensus that the videos should be shortened and users should know that they will be directed to YouTube.

76.5% of testers were likely to use the apps under the “Chicago apps” section because they saw them as useful for residents. 15 of our 17 testers (88%) were not familiar with these apps prior to visiting the website. The “Chicago apps” section met the expectations of testers who, at the beginning, searched for or were interested in city services. There was a positive reaction to these tools, which increased the likelihood of testers using the Open Data Portal in the future.

Data Portal Homepage Chicago apps section

For the resident user, we would suggest placing the “Chicago apps” section higher on the page because it relates to their needs. To avoid confusion in distinguishing between the Open Data Portal and the City of Chicago website, adding a descriptive sentence about what “Chicago apps” is and why it is on this page would be valuable. Sharing that these tools use open data would convey the importance of the open data initiative while giving residents tools they could use in their daily lives.

Screenshot of misaligned layout on mobile devices

Next steps

Based on this CUTGroup test, work is already in progress to make changes that respond directly to our CUTGroup testers’ feedback. DoIT and Socrata are considering changing the Open Data portal’s layout to include:

  • Featuring the “Chicago apps” section higher on the homepage
  • Changing the design of the search box to be more prominent
  • Distinguishing between sections to make each section more apparent and separate from the next
  • Improving the layout to work better on mobile devices

DoIT is also thinking about creating shorter, more digestible tutorial videos for the “How to Videos” section. Finally, the banner will be reviewed and redesigned to be more user-friendly on all devices.

We look forward to future iterations of the Open Data Portal and to seeing how resident feedback is included in that process.

Final Report

Here is a final report of the results:

 

Here is the raw test data:

CUTGroup #28 Open Data Portal

CUTGroup #24 – OpenGrid

Our CUTGroup proctor, Nathalie Rayter, wrote the final analysis report for this test. The CUTGroup Proctor Program trains once-highly-active CUTGroup testers in usability (UX) testing and CUTGroup test processes.

CUTGroup #24 - OpenGrid

For our twenty-fourth Civic User Testing Group (CUTGroup) session, we tested OpenGrid, an open-source interface developed by the City of Chicago that allows residents to search for, interact with, and visualize the City of Chicago’s datasets.

“OpenGrid has an intuitive user interface that’s highly responsive and efficient. The application is accessible from any devices (e.g., mobile, laptop, tablet or desktop). It’s unique, functional, and an easy to use tool to find city data. It extracts city data and displays it in an illustrative form which makes it easier for a user to search and understand the information.”
– City of Chicago Department of Innovation & Technology (DoIT)

The City of Chicago DoIT conducted some user research and testing of OpenGrid before this CUTGroup test, but it tended to be with high-capacity users or City of Chicago staff. The DoIT department wanted to better understand how residents with different levels of data familiarity and digital skills might use OpenGrid.

“Programmers, however, make up a very small portion of the overall population of a city – as a platform grounded in open data, OpenGrid’s mission is to be accessible by anyone who wishes to learn more about their city…

CUTGroup’s process provides a vital link between developer and user that the civic tech world sometimes lacks.  It’s one of the most human crowd-sourced enhancement mechanisms that a civic app developer could have in her toolkit.”
–Sean Thornton, Program Advisor for the Ash Center’s Civic Analytics Network and writer for Data-Smart City Solutions

Segmenting

On April 12, we sent out an email to 1,132 CUTGroup testers who live in Chicago. We wanted to know if they would be available for an in-person test on April 20, 2016. When segmenting our testers, we were interested in including testers who described themselves as not being very familiar with using datasets in their personal or professional lives. We also wanted to include testers who had varying degrees of familiarity with what was happening in their neighborhoods. Lastly, we wanted to include testers with different types of devices.

Screening Questions

During our initial call-out for testers, we heard from 98 CUTGroup members. We received a lot of good information just from the screening questions.

  • 38% of respondents used the Chicago Data Portal before
  • 38% of respondents use or work with datasets in their professional lives, but only half of these respondents said they used the data portal in the past

We were looking to include 20-25 testers for this test. We wanted to test with half of them on laptops and the rest on a variety of mobile devices.

Test Format

For this in-person test, each tester was paired up with a proctor who, at this time, was either a City of Chicago DoIT employee or someone involved with the project (this test happened before we formalized a proctor program). Proctors guided testers through multiple tasks on OpenGrid, observed the results, and took notes on the testers’ experiences and feedback. This test also used an A/B format, where 13 of the testers (“A” testers) began on http://opengrid.io/ (OpenGrid’s homepage) and the other 10 testers (“B” testers) began on http://chicago.opengrid.io/opengrid/ (the app interface). We also wanted testers to test either on laptops that we provided or on their own mobile devices.

"A" testers reviewed this informational homepage

“A” testers reviewed this homepage.

"B" testers began directly within the OpenGrid app

“B” testers began directly within the OpenGrid app

Results

On April 20, we tested OpenGrid with 23 CUTGroup testers at the Chicago Public Library Legler Branch located in the West Garfield Park neighborhood.

This presentation, shared with the City of Chicago DoIT team, highlighted top results from the test.

Homepage

Through the A/B format of this test, we learned how important the context from the OpenGrid homepage is. Testers who started directly in the app were typically more confused about next steps or misunderstood what OpenGrid does. We did hear, though, that there is a lot of information on the OpenGrid homepage, and parts of the homepage spoke to different audiences (especially technical audiences), which made testers feel that this website was not necessarily targeted to them.

“I wouldn’t understand this unless I was in technology. ‘Open source’ is definitely for the ‘tech folks’ and does not matter to me.” -Yoonie, #A18

7 out of the 13 “A” testers who started on the OpenGrid homepage said this website was targeted to Chicago residents, whereas there was much less consensus about the target audience among testers in Group B.

Testers who started on the OpenGrid homepage typically clicked “Launch” (7 out of 13 testers clicked on this button) as their first step. “B” testers did not have a clear sense of what to do first, and therefore, did more exploratory actions to learn more about what OpenGrid does.

Ease of use

As the test designer, I was not prepared for how difficult some of the tasks would be to complete on both mobile and laptop devices. Test sessions typically lasted over an hour, and the majority of testers said that the tasks were “difficult” or “very difficult” to complete. We were lucky to get great, actionable feedback from our CUTGroup testers, which I translated into open GitHub issues on the City of Chicago’s repository.

Overall, how easy do you think it is to use the OpenGrid website?

Group A
5 – Very easy 0%
4 – Easy 23% (3)
3 – Neutral 23% (3)
2 – Difficult 31% (4)
1 – Very difficult 23% (3)

Group B
5 – Very easy 0%
4 – Easy 20% (2)
3 – Neutral 40% (4)
2 – Difficult 10% (1)
1 – Very difficult 30% (3)

We heard a number of improvements that the OpenGrid team could make to improve searching for, finding, and filtering datasets. 13 testers recommended that changes should be made to the OpenGrid search tools to make it easier for users to find the information they are looking for. 7 of these testers thought that the search bar should respond to addresses/zip codes or tags like “311.” 3 testers commented that the existing search filters are too complicated and that they should be simplified. Paloma (#B8) says, “Commonly used queries was easy; all other filters made it difficult.”

7 testers mentioned that they would improve the descriptions and instructions that orient the user to the OpenGrid interface. 2 of these testers recommended adding instructions that suggest how a user would interact with the page, such as suggesting a search or having sample questions. For example, Cleveland54 (#B3) says, “The advanced search panel should be pushed down with a note at top saying ‘welcome, you can search blah blah here.’ The landing page should have a description of the website and then launch to the site.”

We also saw that there needed to be a stronger connection and responsiveness between the filters and the map. Testers made changes to their search criteria in the advanced search panel and expected to see those changes on the map. We also saw that testers would move the map without creating a map boundary using the tool, and then use advanced search expecting that the panel was connected to their map view and would return results in that location.

Language

The biggest theme we saw throughout this test was that testers faced numerous challenges with the language on OpenGrid. From the homepage review, where testers thought that the language was targeted to the tech community, to the search and filtering functionality, the biggest improvement that could be made to OpenGrid is including more accessible language.

“It needs to be user-friendly for all Chicago citizens – terms like ‘geo-spatial filters’ don’t mean anything to most users. It sounds deep, but gets people nowhere. This isn’t Star Trek over here.” -Renee54, #A4

Testers also faced challenges viewing datasets and could not always easily distinguish what the field types meant. Searching required testers to know the exact field name when filtering results down to more relevant information.

Our biggest recommendation to the OpenGrid team was to take time reviewing the site and incorporating plain language into content, datasets, and functions.

Next Steps

Once the CUTGroup test was completed, we updated the City of Chicago’s GitHub repository with all pertinent issues representing the top challenges our CUTGroup testers faced. The City of Chicago DoIT continues to work in the open: inviting developers to participate in open calls, sharing notes on status updates, and documenting their current and future work.

The recent OpenGrid v.1.2.0 release addressed issues that directly came from our CUTGroup testers:

The newest version contains improvements that make OpenGrid easier to use: friendlier language, an improved user interface that highlights the most important features while deemphasizing more technical options, and fewer mouse clicks to see data.

We are excited about opportunities through the CUTGroup to do usability testing on more data-focused websites and applications. While a lot of testers found OpenGrid difficult to use, the 11 testers who said “yes,” they liked OpenGrid, appreciated that it gave them access to new information and data they were not aware of before this test. This OpenGrid test is an example of how we can continue learning how residents interact with data and what their potential needs are in using open data, and then create better user experiences around understanding and using data.

Final Report

Here is a final report of the results:

Here is the raw test data:

CUTGroup #24 - OpenGrid

CUTGroup #21 – Digital Skills

CUTGroup #21 Focus Group Session

For our twenty-first Civic User Testing Group (CUTGroup) session, we conducted focus groups to have conversations with residents about their access to digital skills trainings and resources. We wanted to see if residents know about the resources in their own neighborhood and how they prioritize gaining new skills that center around technology.

This was a different topic for a CUTGroup test, but as we build more technology, we saw incredible value in talking to people about their digital skills. From this test, we wanted to understand:

  1. How people talk about digital skills in the context of their lives and goals
  2. How much they prioritize improving their digital skills
  3. If they know of resources available to them or have used them
  4. How easy or challenging it is to access or take advantage of those resources
  5. Challenges that people face when it comes to accessing the Internet and technology and getting to their goal

We wanted to use this information to shape the new Connect Chicago website and gather some qualitative information on how Chicago residents think about and deal with these issues. Connect Chicago aligns citywide efforts to make Chicago the most skilled, most connected, most dynamic digital city in America. The Connect Chicago network includes more than 250 locations offering training, devices, and Internet access, and helping residents engage with technology. Denise Linn, Smart Chicago’s Program Analyst, runs the Connect Chicago initiative and was key in designing this test, writing questions, and helping take notes during the sessions.

This in-person test took place at Literacenter, a collaborative workspace dedicated to literacy organizations, located at 641 W. Lake Street. We chose to test here because it is a comfortable and flexible environment for testing and Smart Chicago is a member of Literacenter!

Segmenting

On October 20, we sent out an email to 941 CUTGroup testers who are Chicago residents. We wanted to know if they would be available for an in-person test on October 28 for about 45 minutes. We asked screening questions to gather information about how comfortable people felt using technology, whether or not they participated in digital trainings, and what types of skills they wanted to learn.

We looked through all of the responses and wanted to choose testers who did not have advanced digital skills. This meant not selecting testers who had coding skills, had advanced technology related degrees, or used sophisticated software systems for work or personal use. We wanted to reach people who had lower skill sets and might be interested in additional trainings or resources to improve their skills. We also thought testers would be more comfortable if they were grouped with others who were close to their own skills level. 14 CUTGroup testers participated in our focus group sessions.

Responses to Screening Questions

71 testers responded to our screening questions. Here are a couple of things we learned:

  • 28% of respondents said it is “Challenging” to use technology or learn new skills
  • 94% of respondents “Agree” or “Strongly agree” with this statement: “I feel comfortable using computers & technology”; the skills mentioned ranged from using email to coding
  • 96% of respondents “Agree” or “Strongly agree” with this statement: “I want to learn new computer & technology skills”
  • Only 42% of respondents “Agree” or “Strongly agree” with this statement: “I am familiar with where computer & technology resources are in my community”
  • 70% of respondents have participated in a computer or technology training class or program

Test Format

When designing this test, we chose to conduct focus groups. We were worried that in one-on-one interviews we (as interviewers) would influence the responses, and we were interested in participants talking with one another about their experiences. We thought there would be a lot to gain from those interactions, which was worth the risk of participants influencing each other. For example, group interactions could capture a sense of community expectations about technology resources, as well as the language and framing testers use while conversing about digital skill-building in Chicago. As the moderator, I asked questions, ensured everyone had a chance to talk, and kept the conversations from turning negative.

Before the focus groups, all testers completed a pre-survey questionnaire about the technology tools they used. This helped us capture individual responses before conducting the focus groups. We opted to ask many of the more personal, targeted questions about skill levels during this individual pre-survey so as to formally capture that data and avoid putting testers on the spot during the focus group. For the focus groups, I used this script to guide the conversation, although we asked additional questions depending on the conversation.

Results

Pre-survey

In the pre-survey, we learned what technology testers are comfortable using, what they want to do better, and what skills they are or are not interested in learning. All of our results from this pre-survey can be found here.

We learned that testers felt most comfortable with these tools and skills: email, creating a text or slideshow document, search engines, shopping online, and using Facebook and other social media outlets. We also learned that testers wanted to learn how to (better) do these things: creating a spreadsheet, using data visualization software, or learning how to code.

Focus group #1

Our first focus group had 5 testers, and we began the conversation with how these testers use technology in their own lives, either personally or professionally:

CUTGroup Test Discussion on Digital Skills

Tester #16, “Graphic Artist,” shared that he uses a laptop to do online banking; he used to be in the graphic design industry and sometimes freelances, but hasn’t learned the most recent versions of graphic design software.

Tester #15, “techgeek,” uses mobile delivery apps like GrubHub and Caviar.

Tester #12, “GF,” works for a Chicago River kayaking company and uses Apple products for work but Windows products for personal use; collaborative tools like Google Docs help them transition between those platforms.

Tester #17, “Nonchalant,” responded to this question with “My job is to go to school!” but mentioned that he checks his email frequently.

Tester #14, “Rogers Park,” told us that he works in retail, so he doesn’t use technology much for work, but appreciates this because he can interact more with people. Outside of work, he uses the Internet to manage his bank account, finances, and retirement funds and “stares at social media.”

Family networks rely on one another to teach digital skills.

Our conversation with this group focused a lot on how we use technology in a family setting. This started with Tester #14, who does not appreciate that everyone in his family is always connected, and we continued with how the other testers see technology being used in their families. While we heard about many experiences of technology creating a disconnected feeling because family members were on their own devices, we also learned how teaching technology was a family activity.

Tester #17 helped his dad understand Facebook and “it was hard.” Now Tester #17’s dad is on Facebook and tags him “20 times a day!” A few testers shared their experiences teaching their parents to use social media, but two testers also had parents with more advanced technology skills who taught them a new skill, like Tester #12’s mom, who builds her own computers.

Tester #16 bought his father an IBM computer, then an iMac computer, and tried to teach him how to use the Internet so the family could contact him in Puerto Rico more easily. Tester #16 thought teaching his father was extremely challenging and that it might be “too late” for his father to learn how to effectively use this hardware.

Challenges in learning new technology derive from a feeling that technology is always changing, and maybe changing too quickly to keep up.

In every focus group we conducted this evening, we wanted to talk about the challenges of learning new technology. While not all testers were at the exact same digital skill level, this group saw the challenges of learning digital skills as keeping up with new technology, remembering what they had already learned, and devoting time to learning. “You have to catch the train,” said Tester #17 when talking about the speed of changing technology.

A few testers talked about Excel specifically as something that is difficult to learn. Tester #14 shared that “Memorizing formulas is hard. I really want to learn Excel…it seems so simple, but it’s not.” Tester #15 said that learning Excel was very challenging in school and said that if they had “someone right there,” some human interaction, then learning would be easier.

Tester #12 uses online coding courses (e.g., Code Academy) to try to learn how to code but says it is challenging to complete a lesson, return to the series later, and remember what they learned beforehand. “I wish there was a classroom experience like that for adults.” Tester #12 expressed a preference for learning technology skills in a classroom setting: “If I had a teacher, I could learn how to code.”

We continue to hear how much testers value human interaction when learning new digital skills; this group saw an in-person class or instruction as necessary to be successful.

Focus group #2

Our second focus group also had 5 testers, and we started again with how these testers use technology in their own lives, either personally or professionally:

Tester #24, “Ready to learn,” said that at work she is on her PC using Excel and Outlook. Outside of work, she goes to “fun” websites, researches on Google, and sometimes uses Facebook.

Tester #22, “Like to discover useful tools,” shared that they use the same tools in the same way in their personal and professional lives. “My work doesn’t require a lot of complex calculations.”

Tester #21, “I love learning,” uses eBay and Microsoft products.

Tester #25, “Not Dead Yet,” uses Microsoft, Google, and some communications software at work. At home, they read on their Kindle, pay bills online, manage other finances with technology, and play games on the Internet.

Tester #27, “Involved,” says he does data entry for work and mostly uses Excel. At home, he uses Word and websites like Google, YouTube, and Amazon.

There’s not always a clear technology goal, but keeping up is important, and the preferred format of instruction or resources might depend on what they want to learn.

Unlike the first focus group, this group did not share clear technology or digital skills goals. When asked what they wanted to learn, no specific software platform or hardware was mentioned. We did hear that some testers wanted to “keep up,” either with their children and family or with job-related technology skills.

Tester #25 said his goal is to go paperless at home and is in the process of using different tools to scan all of his documents and receipts into one place to manage his finances.

When I asked testers where they would go to learn new skills, Tester #22 shared that “You can learn almost anything on YouTube.” The example given was that when you get a new phone, you can find videos online showing how to use it from the moment you open the box.

Testers #21 and #27 would rather go to a class because they prefer person-to-person contact and want to ask questions. Tester #21, somewhat jokingly, added that “I have kids. They are my personal tutors.”

Tester #24 explained that if she needs to know something quick or “one-off” (like fixing something), she would Google it, but if she wanted to learn a whole new skill or system, then a class would be best.

Technology classes could be organized around common problems, not tools.

When we talked a bit about ideal technology classes, we heard that some testers were interested in classes organized around common problems people experience that could be solved by technology or computer skills. Tester #22 brought up this idea and thought it would be practical for class recruitment. Here were some ideas: “Make a will,” “Collect and organize recipes online,” or “How to go paperless at home.”

Tester #21 agreed that this approach is more emotional and personal to prospective testers.

Focus group #3

Our last focus group had 4 women testers, who had more advanced skill sets than we saw in the other groups. We started again by asking how these testers use technology in their own lives, either personally or professionally:

Tester #37, “Almost Advanced,” said that at home she uses the telephone, TV, and tablets, and at work they use tablets. Tester #37 took a Microsoft Excel class at Association House.

Tester #32, “Striving for literacy,” said that she uses completely different devices at work and at home because of company policy. Tester #32 works at Motorola and has to separate her personal online activity from her work online activity. She says she is slow to adopt new technology in her personal life, but at work, she is eager to learn new tools: “If it’s at work, I want to learn and improve.”

Tester #31, “Recent Upgrade,” said that they had a similar situation to Tester #32’s, and observed from the other answers that people often don’t adopt new technology on their own but are pushed into it, especially at work.

Tester #34, “Reluctantly Tech Addicted,” said that they try not to use technology at home at all. They spend over 40 hours a week in front of a screen for work and don’t want to add to that. Tester #34 commented that there’s always new technology to learn at work, and work often provides poor training without a reliable person to turn to for questions.

There was uncertainty when we asked testers whether they consider themselves “tech-savvy,” and those answers sometimes changed when they heard from others.

In the screening questions, we asked testers, “When you think of the most tech-savvy (or technically advanced) person you know, what can they do that makes them so good at using technology?” We were interested in what ways a person is technologically savvy, and whether that comes from the tools they use, the skills they have, or their general comfort level in learning new tools. Some responses from the screening questions included knowing how to code or create a website, being able to use different hardware, learning new skills quickly and then being able to teach others, technology coming as second nature, or just experience.

For this focus group, I specifically asked whether they considered themselves “tech savvy,” especially since we had a group of all women who, based on their screening questions, we did not think were as advanced in their tech skills. I was not sure if that was because they did not rate their skill sets highly or if there was another reason. 3 out of 4 of these testers said they were “tech savvy.” Tester #37 initially said she was, but after listening to the others’ experiences she changed her mind and said she wasn’t: “I know the basics – Microsoft.” Even before the focus group, her tester profile name was “Almost Advanced,” and she had participated in multiple digital skills trainings. The other women in this focus group immediately jumped in, saying that having skills in Microsoft Office is not basic.

Later in the conversation, when we discussed taking classes to learn new digital skills, Tester #37 said she took a basic computer class at an organization near her home even though she already knew Microsoft Word and some of the skills being taught. I asked why she chose a class covering software she already knew: “Why start at the beginning?” Tester #37 shared that certifications are important because they impact how much you get paid. She also shared, “I figured there might be something I didn’t know,” and she received a free laptop after taking the course.

Connecting residents to resources

Our final goal for this CUTGroup test was to understand how we can better connect residents to technology resources in their neighborhood.

The majority of these testers were interested in taking in-person courses where they could have personal support. Choosing an in-person course over an online course, however, depended on the subject matter. Testers described many resources that can be found online, but online resources are mainly useful for learning a quick skill (fixing something quickly), not an entire skill set like a new software platform.

Testers are looking for free classes in their neighborhoods, and not everyone is aware of the resources near them. In the second focus group, we discussed how free classes are generally basic classes, and there are not as many intermediate or advanced courses available for free.

Connecting residents to resources depends on how those resources are marketed. In our last focus group, we talked about cross-collaboration between organizations: if you take a class at one organization, its staff could and should help determine the next class you should take based on your new skill set. This extra guidance is key to driving learners to continue their training.

Testers are not sure how to rate their skill set and need guidance in determining whether a new class is right for them. As an organizer and designer of this test, I found that it is hard to rate the skill levels of others, and rating your own skill level is even harder. Guiding residents to the best class for them is extremely important and can be done in multiple ways:

  • Digital skills certifications provide a structure for the next step in the learning continuum
  • Instructors or trainers at organizations can give their students better information on next steps, even when the next class is outside their organization
  • Showcasing the benefits that come from learning digital skills, whether skill progression or work-related gains, can encourage more residents to participate
  • Being transparent in the class documentation and syllabus will allow residents to review and determine whether a class is too advanced or too easy based on what they already know

At Smart Chicago, we are excited to incorporate the ideas from this CUTGroup test into our Connect Chicago project, create new ways of talking about digital skills trainings, and find ways to help residents learn technology to improve their own lives.

Other Documentation

Here is a link to the notes from our focus groups, which cover all of the topics we discussed during this test.

Here is the raw data of the pre-survey results:

Here is a link to our photo album from the test:

CUTGroup #21: Digital Skills

CUTGroup #17 – Ventra Transit App

For our seventeenth Civic User Testing Group (CUTGroup) session, we tested the Ventra Chicago mobile transit app. The Ventra app allows riders to manage their Ventra account, buy mobile tickets for use on Metra, get notifications about their account, and use other features.

This was an exciting opportunity for our CUTGroup testers to be the first to test the new app and provide feedback that could affect many public transit riders. This is a snippet about the test from the Chicago Tribune:

The CTA, Metra and Pace said they are working with a consultant with expertise on testing apps, Smart Chicago Collaborative’s Civic User Testing Group, to iron out undisclosed issues with the Ventra mobile-ticketing app, which is designed to let commuters purchase fares and manage their Ventra accounts from their smartphones.

“We get to make a first impression once, and we want the Ventra app to make a great first impression on CTA, Metra and Pace customers,” said Michael Gwinn, CTA director of fare systems.

This test had two parts:

  • A remote test where CUTGroup testers downloaded the beta version of the app and then used it in their daily lives
  • An in-person test where testers gave direct feedback and performed specific tasks in-app.

These two parts allowed us to collect information about app usage away from the test environment, and also have great conversations with our testers about what was or wasn’t working for them.

Here is what Tony Coppoletta, Manager, External Electronic Communication of the Chicago Transit Authority, said about what the Ventra app team wanted to learn from this CUTGroup test:

The CTA has an incredibly varied set of use cases, and we look forward to seeing how all our riders can benefit from this app’s feature set. In addition to some public outreach to invite people to test our app before public release, we felt the diverse and experienced testers the CUTGroup offers would be an excellent addition to our real-world user testing of the app. The focus is to trap bugs, ensure transactions happen as expected, and ensure good usability and a solid UX overall.

Segmenting

On June 28, we sent an email to all 880 CUTGroup members inviting them to complete an initial questionnaire about this new test opportunity. We wanted to know if they would be able to participate in a remote test component between July 7 and 15, and then in an in-person session on either July 15 or July 16, 2015. We asked a number of screening questions to gather the information we needed.

We wanted to know what device they had, what software version they were running, and whether or not they had a Google Play or iTunes account. This was important because they would have to be able to download the test version of the app. We also asked several questions about using Ventra, such as whether they had an account, whether they load passes or value, whether they use public transportation, and what modes of transportation they use.

Screening Questions

During our initial call-out for testers, we heard from 91 CUTGroup members. We received a lot of good information just from the screening questions. This is what we learned about their main modes of public transit:

  • 95% of respondents use CTA trains
  • 90% of respondents use CTA buses
  • 44% of respondents use the Metra
  • 25% of respondents use Pace

We were looking for around 30 testers for this test and wanted about half of the group to be Android users and the other half iPhone users. We were also interested in getting a mixed group of Ventra users and testers who ride different modes of public transit (CTA buses, CTA trains, Metra, and Pace).

Test Format

This test was the first time we told testers in our initial call-out email what app we would be testing. Normally we do not disclose this until the time of the test to avoid having people visit the website or app beforehand. However, we had a lot to tell our testers about the instructions and updates they would receive from Ventra about downloading the test version, so it made sense to give them all of the information upfront. We also had to make it very clear that testers were going to use or create a real Ventra account and make real purchases.

We offered testers a total of $40 for helping us, instead of the normal $20 per test, because they were doing two big parts: 1) downloading the beta version and using it remotely, and 2) participating in the in-person session. We respect our testers’ time and feedback, and we understood how worthwhile it was to have them participate in both pieces of the test.

Once testers were selected for a test, we emailed them to check their availability for the in-person session, and gave them instructions about how to get the beta version on their devices. Here is a part of the email we sent:

Thanks for your interest in our CUTGroup test of the new Ventra mobile transit app. You have been chosen to participate in this test. Here are some details about next steps:

  • If you have an iPhone, please download the TestFlight app from the iTunes store. You will need this app before you can download the Ventra app.
  • If you have an Android device, please make sure you have a Google+ account. You will need to join Google+ before you can download the Ventra app.

Early next week you will get an email invitation directly from Ventra to download the app. Even though this is a CUTGroup test, the fares you buy in the app are real—when you load fare or tickets in the app, your chosen payment method will be charged and the fare will go into your Ventra Card’s transit account or toward real mobile tickets for Metra. Also, you will receive emails from Ventra giving you extra info or updates.

We understood that asking testers to download the TestFlight app or get the beta version through their Google+ account would add extra steps that our testers might not be familiar with. We communicated regularly with testers to make sure they were able to download the app and to assist if any trouble arose. Only 1 tester could not download the app remotely, but they were still invited to participate in the in-person session.

Remote test

Testers were invited to start using the app in their day-to-day lives as soon as they received the app invitation. We sent testers questions about their experience, and they were also invited to submit to Ventra’s bug report form directly, which some testers did. From the remote test, we wanted to understand what testers did first (create a new account, log in to an existing account, etc.), whether or not they added value, and to get feedback about their experience, how easy they thought the app was to use, and ways it could be improved.

In-person test

After the remote test, we held two in-person sessions at The Chicago Community Trust on July 15 and 16, 2015 to have more in-depth conversations with testers about their experiences and to watch how they completed specific tasks, such as using “Trip Tools,” registering a new card, and purchasing a Metra ticket. We asked testers to bring their own devices to complete part of the test, but we also had test devices (both Android and iPhone) available for testing purchases or creating a new account. By incorporating the test devices, we were able to test more tasks, though we understood that testers might not feel as comfortable using unfamiliar devices.

Results

Remote Test

27 testers completed Part 1 of the CUTGroup test, which was a remote test where testers used the Ventra app in their normal day-to-day routine. During the remote test, only 8 testers added CTA transit value or passes to their account, while only 5 testers purchased Metra tickets. Here are some comments from testers who did not make a purchase in the remote test:

ChgoKC says, “I tried to add value to the account, but it asked for my Ventra password, which I entered (it’s 4 digits), and it said the password had to be 8 digits. I tried my password for my bank account, but that was ‘incorrect.’ So not sure what password it’s looking for.”

Gr8fl4CTA said, “I haven’t made a purchase because I have a monthly pass. I do not like that you can only add $5 to the card. There should be an option to either fill out the amount – especially since most rides are $2.25 or $2.50. People shouldn’t be forced to put $5 on their card if they don’t need all of that.”

Here are some responses from testers who made a purchase during the remote test:

CTA/Metra Commuter/Evil Genius says, “Purchasing in general is easy. Multiple times, I’ve purchased a ticket while walking to the train. I love the ability to pay with my Ventra auto-reloading balance.” 

Frequent CTA / Occasional Metra says, “Purchasing the Metra tickets was very easy – I only purchased one-way tickets so far using the app, but it seemed very straightforward in choosing which stops and identifying the ticket type I wanted. The conductor that checked my mobile ticket on UP-N was educated on the system and he had no trouble accepting the ticket; that was my only concern about using the Ventra app to buy a Metra ticket.”

Systems engineer CTA rider says, “I added a 30 day pass to an existing transit account and it went very smoothly. I’m also REALLY, REALLY impressed with how split tenders for payments were implemented. It’s really intuitive and I really like that feature.”

3 testers mentioned that sometimes the app was slow to load, and 3 testers either expected or wanted Near Field Communication (NFC) capabilities. Otherwise, testers had a wide range of responses when describing their overall experience using the app. Here are some improvements we heard:

  • Show password
  • Be able to transfer value from one card to another
  • Improve the back button so it does not kick the user out of the app
  • Add route planning to Trip Tools

When asked about the visual design of the app, the majority of testers (67%) thought it was “appealing” or “very appealing.” Here are some useful responses from testers about visual design and how it might be improved: 

CTA rider AC would change the color of the cursor from white to black or another darker color: “It is really annoying that I can’t see where the cursor is.”

Occasional CTA rider says, “The alerts in the Trip tools are a very light font and not very easy to read.”

When asked about how easy it was to use the Ventra app, 22 testers said it was “easy” or “very easy” to use.

In-person Session

26 testers came to the in-person sessions (14 on Day 1 and 12 on Day 2). Two of these testers did not complete the remote questionnaire, but still provided great feedback during the in-person session.

Trip tools

There was a lot of conversation around Trip Tools and ways to improve this feature. Many testers saw it as a feature that would get them to use the app on a regular basis. Testers shared that the phrase “Trip Tools” led them to expect more route-planning features, similar to Google Maps. For example, one tester specifically called out wanting to see a map during this part of the test. Testers thought this feature would help them choose bus and train routes to get to locations, not only tell them when the next train or bus is coming. Changing the name of this feature might help set people’s expectations for the tool.

9 out of 26 testers found the Trip Planning feature to be similar to or on par with other websites or apps that they use. Here are some responses from testers:

Brown line rider / occasional UP-N Metra rider says that this is “Similar to what you’d get on the CTA website only this app looks more this decade.”

CTA rider (KB) says, “It’s good, has about the same functionality that many other apps have, and the alerts here are easier to find” because they are shown more prominently.

9 out of 26 testers thought the Trip Planning features on Ventra were better than other websites or apps they currently use.

CTA rider (AC) says, “I trusted it more because it was affiliated with Ventra.”

Daily CTA Rider (MW) says, “Actually easier than my other apps! And I think that may be because of simplicity and consistency on the screen and not use of a lot of different colors.”

6 out of 26 testers thought this feature was worse than other tools that they use. Here are their comments:

Frequent CTA / Occasional Metra says, “This is stop driven. I think in terms of routes. The other apps think in terms of route and not stops. Other apps locate you on a map and tell you what is around. Helps you find the bus and route you need to take.”

Riding the 20 every day says, “It’s similar, but the lack of map function is a problem.”

Metra tickets

Testers were asked to purchase Metra tickets on the test devices. 31% of testers said that they travel on the Metra once or twice a month. Out of our group, 5 testers (19%) had never used the Metra before. Here are the top things that testers liked about the process of purchasing a Metra ticket:

  • Split payment
  • Using transit value as payment
  • Billing info saved
  • Stops are in order of location on the line

A recommendation for the Ventra app team would be to test the Metra ticket functionality with residents who use the Metra as their primary form of public transit. Since our testers live in Chicago, most of them tend to rely on CTA buses or trains.

Overall

24 out of 26 testers (92%) said they liked the Ventra app during the in-person test due to how easy it is to use (7 testers mentioned this), the Metra functionality (7 testers), the convenience (5 testers), and the visual design (3 testers).

Brown line rider / occasional UP-N Metra rider said that this app is going to replace the Ventra website for him.

Metra rider88 says, “It was really convenient!” Normally Metra rider88 has to go down to the vending machine at the train station to add value via credit card and this can be very inconvenient due to the lines that form.

Frequent CTA / Occasional Metra says, “I will ride Metra more often because of the app. I wouldn’t have usually considered it as an option.”

Blind Transit rider says, “It has the promise to provide me with access on the go to the information my Ventra pass has. [This app] can be more accessible and useful than any other available option as a blind transit rider.”

18 testers (69%) said that they believe the Ventra app fits into their day-to-day routine. Here are some responses from testers who said, “Yes” the Ventra app fits into their day-to-day routine:

ChgoKC says, “It’d be similar to the parking apps, where you’d keep it around and put on the home screen of your phone.”

Mischievous Metra Maverick rides the Metra daily and says this “eliminates the need to look for a soggy ticket that is ruined halfway through the month.”

Geek Sheek says this will probably be “one of his top five apps.”

Frequent CTA Rider JH, Occasional CTA Rider, and CTA Savant do not think that the Ventra app fits into their routine because they are looking for NFC technology so that they can use their phones at the turnstiles to get on trains or buses. In addition, some testers might not use this app because their transit value gets loaded automatically and they might not ride the Metra or add additional value regularly. Gr8fl4CTA is not sure if she will keep the Ventra app because the trip planning does not work well enough, while TA is not sure because she knows her daily value and would not need to use Trip Tools often. TA says, though, “It is handy for adding a value and getting a pass.”

Updates

When we onboard new developers for a test, we emphasize the importance of the CUTGroup motto: “If it doesn’t work for you, it doesn’t work.” Sometimes it can be a challenge for developers to invest the time to actually make changes based on feedback. That was not the case here. Here are some of the updates they’ve told us they’ve made or are working on based on the feedback from testers:

  • “Trip Tools” has become “Transit Tracker” to more accurately represent what this feature does. The Ventra app team expects to add trip planning in the future. Also, as soon as the user taps “Transit Tracker,” their “Favorites” and “Nearby stops” appear right at the top of the page
  • Accessibility continues to be a priority, and one CUTGroup tester, Blind Transit Rider, thoroughly helped the Ventra app team with their app’s accessibility functionality
  • There is now a “Show Password” button to help people enter their passwords (see the sketch after this list), and the cursor is now a dark blue that helps users see that the cursor is in the correct field. The Ventra app team also wants to make the password requirements more prominent when creating an account
  • The back button will be improved to be more in line with what Android users expect (also illustrated in the sketch below)
  • More information is going to be added for Metra in the Transit Tracker feature to give better information about when the next train is arriving
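
For readers curious what fixes like the “show password” toggle and the Android back-button change involve, here is a minimal Kotlin sketch. It is purely illustrative and assumes nothing about the Ventra app’s actual codebase: the activity name, layout, and view IDs below are hypothetical, and the password toggle relies on the Material Components TextInputLayout API.

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.google.android.material.textfield.TextInputLayout

// Hypothetical illustration only – not the Ventra app's actual code.
class LoginActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_login) // hypothetical layout

        // "Show password": Material's TextInputLayout ships a built-in
        // end icon that reveals or masks the password field on tap.
        val passwordLayout = findViewById<TextInputLayout>(R.id.password_layout)
        passwordLayout.endIconMode = TextInputLayout.END_ICON_PASSWORD_TOGGLE
    }

    // Back button: pop the in-app screen stack first, so pressing Back
    // returns the user to the previous screen instead of exiting the app.
    override fun onBackPressed() {
        if (supportFragmentManager.backStackEntryCount > 0) {
            supportFragmentManager.popBackStack()
        } else {
            super.onBackPressed()
        }
    }
}

The design point both fixes share is matching platform expectations: users assume a password field can be revealed and that the Back button steps backward through screens, so meeting those defaults removes friction without adding new interface elements.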

Tony Coppoletta of the Chicago Transit Authority said:

Our work with the CUTGroup has proved to be an incredibly valuable experience as part of our test plan for the Ventra app—both through the thoughtful feedback we received via the remote test and in affording us an opportunity to sit down face-to-face with a diverse range of CTA, Metra and Pace riders and learn about users’ experiences together, combining open dialogue and structured testing.

CUTGroup is now a community of more than 1,000 residents in Chicago and all of Cook County who work together to make lives better through technology. This test has been an example of how these testers can be an integral part of changes to technology. It is exciting to see changes being made based on the direct feedback from testers.

CTA Rider says, “I love being able to help contribute to the development of this product.”

Route 66 Book Stalker liked testing something that is “important and impacts a lot of people.”

Final Report

Here is the final report with a look at all of the responses for each question that we asked, followed by all of the testers’ responses to our questions.

Here is the raw test data: