Gift Cards for CUTGroup

The Code for America Chattanooga team recently expressed interest in learning more about the operations of CUTGroup to begin their own civic user testing group. Smart Chicago is dedicated to openly documenting our work for everyone to use. After the Chattanooga team reviewed our documentation, we talked further and realized we were missing information about gift card mechanisms. In this blog post, we will share what we learned about gift cards.

Once a resident signs up to be part of the CUTGroup, we send them a $5 VISA gift card. If and when they are chosen to test a civic app, we give them a $20 VISA gift card.

When purchasing gift cards for your program, there are three main considerations: the type of card, the costs and fees associated with each card, and the quantity and expiration date.

Types of Gift Cards

There are different types of gift cards out there – prepaid VISA or Mastercard gift cards, store-issued cards, bank-issued cards, online gift cards, and so on.

We considered Amazon gift cards as an option; these cards have no card processing fees, no expiration dates, can be purchased in small face values (as low as $0.15), and can be sent directly by e-mail. This is a very convenient option, but by choosing this service we would assume that CUTGroup members have regular access to their e-mail, know how to use Amazon, and know how to access and use the gift card for their purchase. If someone is not an Amazon user, this could be a barrier to joining the CUTGroup. We want to include everyone, so this just does not work.

Other gift card types include specific store- or bank-issued gift cards. Sites like ScripSmart can provide comparisons between gift cards and give you an idea of what you need to ask about before purchasing your own cards. Again, our goal is to provide the most flexible currency possible, so these do not work for us either.

We purchase VISA gift cards specifically because they can be broadly used in different locations. It is important that CUTGroup spans all different types of residents in Chicago, and these gift cards should fit in the normal course of these residents’ lives. By purchasing these cards, we are spending more than face value on fees and have to take time to mail them out, but the value of accessible gift cards is worth it for the goals of our organization and this program.

Costs and Fees

It can be hard to find companies that offer Mastercard or VISA gift cards in values smaller than $20. In addition, there are a number of costs and fees associated with each gift card. With our first vendor, we spent around $10 for each $5 sign-up card that we sent out. Here is a list of our costs for a 100-card order:

  • Face value of card: $5.00 per card
  • Card processing fee: $3.95 per card
  • Credit card processing fee: $1.00 per card
  • Shipping fee: $21.95 per order
  • Total: $10.17 per card

The card processing fee can be higher or lower depending on the quantity of cards purchased, how you plan to pay, and the length of the expiration period.

With this in mind, we researched and found an option to lower our fees and get a longer expiration date through a new vendor, Awards2Go Visa Award Card. We were able to lower the cost to approximately $7.07 for every $5 sign-up card. Here are the new costs for a 100-card order:

  • Face value of card: $5.00 per card
  • Card processing fee: $1.75 per card
  • Credit card processing fee (1% of total order): $5.00 per order
  • Shipping fee: $27.00 per order
  • Total: $7.07 per card

Expiration Date

Last October, we learned that we still had a lot of gift cards that were about to expire, and some that had already expired. We lost out on the expired cards because the fees to “restock” them would have been higher than the value we would receive back. We also had 118 $20 cards and 103 $5 cards that were going to expire at the end of November. If we sent these cards back to the vendor, we would receive only $10 for each $20 gift card.

We thought of some creative ways to use the cards, including a refer-a-friend campaign and a remote CUTGroup test that allowed many testers to participate. Still, some of our testers received cards late (after the expiration date), and we had to send them new cards. We now have gift cards with expiration dates set eight years out, though the card value begins to decrease 13 months after we receive the cards ($2.50 deducted per month). This gives CUTGroup participants a longer time frame to use the cards, but we still have to be mindful of the quantities we purchase to ensure we can use them before they lose value.

Final Thoughts

The success of CUTGroup operations is based on the quality of engagement with our residents and the open communication we have about our process. When our gift cards were about to expire, we told our CUTGroup members so they knew. When gift cards arrived too late, members e-mailed us to let us know, and we sent out new cards. These everyday conversations let us build a better community with Chicago residents around data and technology, and help us be better at what we do.

CUTGroup is an important program for Smart Chicago because it cuts across our three areas of focus: access, skills, and data. Not only does it allow residents to connect around data and technology, it also creates meaningful communication between developers and residents. We will continue to share our processes around our programs in the hope that our experiences are useful to everyone.

Exploring the Chicago Works for You Dataset

Editor’s note: there is a massive set of data behind the Smart Chicago Chicago Works for You (CWFY) product – a citywide dashboard with three million requests across fourteen services, all drawn from Chicago’s Open311 system. I asked Q. Ethan McCallum, a Chicago-based consultant focused on helping businesses succeed and improve through the practical use of data analysis, to review the data and see what we could learn. Here’s his take. –DXO

Drafting a game plan

Our first order of business was a brainstorming session to establish a plan of action and narrow the scope of our efforts.

CWFY data includes temporal and spatial elements in addition to the service request counts. That makes for a very rich dataset and an open-ended analysis effort. At the same time, we wanted this to be a reasonably simple, lighthearted exercise, and we were under time constraints.

After some brainstorming, we chose to limit our exploration to the daily counts of service requests, and to search for interesting and non-obvious connections. Specifically, we wanted to look for unexpected correlations in the data: would two services experience similar movement in request volume? And if so, would we be surprised by the connection?

Statistically speaking, correlation is expressed as a numeric measure called the Pearson correlation coefficient. Often written as r, it can take on values from -1 through 1. A value of 1 means the two series of numbers (in our case, daily counts of service requests) move in perfect unison. (Movement in one series doesn’t necessarily cause movement in the other, mind you; it just means that they walk in lock-step.) A value of -1 means that the two series move in opposite ways: if one value increases, the other decreases. When r is 0, the two series’ movements have no clear relation.

Interpretation of r is somewhat subjective: we can say that values “close to” 1 or -1 indicate a strong relationship and values “near” 0 do not, but there is no clear-cut definition of “close” in this case. People often say that r greater than 0.7 or less than -0.7 indicates reasonable positive or negative correlation, respectively. Then again, you may want something closer to 0.9 or -0.9 if you’re trying to make a strong case or working on sensitive matters.
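To make that concrete, here is a minimal pandas sketch that computes r for two series of daily request counts; the numbers below are made up purely for illustration.

```python
import pandas as pd

# Two made-up series of daily request counts, purely for illustration.
daily_counts = pd.DataFrame({
    "street_light_1_out": [12, 15, 9, 22, 18, 30, 25],
    "alley_light_out":    [10, 14, 8, 20, 17, 27, 24],
})

# Pearson correlation coefficient r between the two series
# (pandas uses the Pearson measure by default).
r = daily_counts["street_light_1_out"].corr(daily_counts["alley_light_out"])
print(f"r = {r:.2f}")  # a value near 1 means the series move almost in lock-step
```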

In search of the non-obvious

Once we had established our game plan, we fired up some R and Python and got to work. (Note that this is a rarity in the world of data analysis: data prep and cleanup often account for most of the work in an analysis effort, but because of how we’d built CWFY, the data was already clean and in the format we needed.)
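For the curious, this kind of pairwise comparison can be boiled down to a few lines of pandas. The sketch below is not our exact code; it assumes a hypothetical daily_counts.csv file with one row per day and one column per request type.

```python
import numpy as np
import pandas as pd

# Hypothetical input: one row per day, one column per service request type.
counts = pd.read_csv("daily_counts.csv", index_col="date", parse_dates=True)

# Pairwise Pearson correlation between every pair of request types.
corr = counts.corr()

# Flatten the matrix to (type A, type B, r) rows, dropping self-correlations
# and duplicate pairs, then keep only the reasonably strong pairs.
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
pairs = corr.where(mask).stack().rename("r").reset_index()
strong = pairs[pairs["r"].abs() >= 0.6].sort_values("r", ascending=False)
print(strong)
```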

We found a number of reasonably strong correlations among the request types (with r values ranging from 0.60 to 0.76), but we were disappointed that none struck us as particularly out of line or surprising. For example:

  • There are three request types related to broken street lights (street_light_1_out, alley_light_out, and street_lights_all_out, if you’re looking at the raw data), and they exhibit a similar shape in call volume. One plausible explanation is that several people call to report the same issue, which gets filed under different categories.
  • Building and sanitation violations move in similar fashion. In this case, callers may express different concerns about the same issue, so 311 puts those in different buckets.
  • Rodent complaints sometimes pair up with broken street lights. Perhaps the rats felt more free to move about under cover of darkness?

These are just hypotheses, and therefore deserve additional research. That said, we were looking for correlations that did not make immediate sense. None of these fit the bill, so we decided to look elsewhere.

Blame it on the rain?

When you don’t find anything of interest in one dataset, you can sometimes mash it up with another. Our data had a temporal element — number of service requests over time — so it made sense to pair it up with other time-based data related to Chicago. That led us to everyone’s favorite small-talk topic, the weather.

Our goal was to see whether 311 service request volume would match up with temperature patterns or rain storms. In particular, we asked ourselves:

  • If the temperature rises above 90 degrees, do people report more cases of abandoned vehicles?
  • Do 311 calls for tree debris coincide with rain storms?

Acquiring weather data is surprisingly painless. The National Climatic Data Center provides an API for storm events, and downloads for daily temperature reports.

While this data was very clean and easy to work with, it did not provide strong support for our new hypotheses: in both cases, r fell between 0 and 0.08. People will debate what counts as “reasonably close” to 0, but we decided that 0.08 was close enough for us. (We readily admit that the first question was a lark, but the second one seemed quite reasonable.)
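For reference, the tree-debris check reduces to a join and a correlation, along the lines of the sketch below. This is not our exact code; the file and column names are placeholders, and the precipitation series would come from an NCDC download.

```python
import pandas as pd

# Placeholder inputs: daily 311 counts and daily precipitation, one row per day.
counts = pd.read_csv("daily_counts.csv", index_col="date", parse_dates=True)
weather = pd.read_csv("daily_precipitation.csv", index_col="date", parse_dates=True)

# Align the two series on date, then compute Pearson r.
merged = counts[["tree_debris"]].join(weather[["precipitation_inches"]], how="inner")
r = merged["tree_debris"].corr(merged["precipitation_inches"])
print(f"r = {r:.2f}")  # we saw values between 0 and 0.08, i.e. no clear relationship
```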

What next?

We took a very quick, cursory glance at the CWFY data to poke at a couple of fun questions. This was hardly a thorough data analysis on which we would base decisions, but instead a whimsical trip across a new dataset. Our data excursion left us empty-handed, but it was fun nonetheless.

In most data analysis exercises, this is when we would have returned to the proverbial drawing board to formulate new questions, but we had reached our time limit.

Now, it’s your turn: you probably have your own ideas, and the opportunity to perform more in-depth research. We make the CWFY data available via our API, which means you are free to build on it and explore it at your leisure. Where will your experiments take you?
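If you want a quick starting point, a call to the API looks roughly like the sketch below. The endpoint path and parameters here are placeholders rather than documented routes, so check the CWFY API documentation for the real ones.

```python
import requests

# Hypothetical endpoint and parameters -- consult the CWFY API documentation
# for the actual routes; this only shows the general shape of a request.
url = "http://api.chicagoworksforyou.com/requests/counts_by_day.json"
resp = requests.get(url, params={"start_date": "2013-01-01", "end_date": "2013-12-31"})
resp.raise_for_status()

daily_counts = resp.json()  # the exact structure depends on the endpoint
print(daily_counts)
```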

If you’re eager to work with the data but aren’t sure where to begin, we’ve included a short list of starter ideas below.

Have you found an interesting perspective on the CWFY data? Please let us know. We encourage people who analyze or build apps on the data to contact us at [email protected]. We look forward to hearing from you.

The Chicago Works for You Data: starter ideas for analysis projects

Short on ideas? Please try these:

  • deduplication: try to identify when the same issue was reported under different names (e.g., the various street light incidents)
  • take a deeper look at the correlations we found: hypothesize as to why those correlations may exist, then find evidence to test that hypothesis
  • geospatial analysis: break down the requests by ward, and see whether the correlations remain. (For example: what if it was sheer coincidence that rat complaints moved in tandem with graffiti removal? What if those calls were on opposite sides of town?) Also, see what new correlations arise.
  • time series analysis: shift the data forward and backward in search of lagged correlations (for example, “a rise in calls for Request Type X often predates a similar rise in calls for Request Type Y”); see the sketch after this list
  • blend with other data sets: for example, pair up the CWFY data with something from the Chicago Data Portal.

New to data analysis? You may find the following books helpful:

  • R in Action (Kabacoff) – how to use the R statistical toolkit to explore data
  • Bad Data Handbook (McCallum) – a series of contributed tales on working through data problems
  • Python for Data Analysis (McKinney) – use the popular Python programming language to analyze your data

Civic Innovation Toolkit: Twilio

Twilio is a cloud communications platform that allows web apps to make and receive phone calls and SMS text messages. You’ve probably used Twilio at some point even if you weren’t aware of it. If you’ve ever received a text message when your cab arrived, when your food went out for delivery, or from a campaign, you were probably interacting with Twilio. The Smart Chicago Collaborative offers Twilio, as part of our developer resource offerings, to developers looking to build apps that solve civic problems in Chicago.

The real strength of Twilio is its ease of use. With just a little bit of time and code, you can create civic apps that send SMS messages or make phone calls. Below the fold, Twilio’s representative in Chicago, Greg Baugues, gives us a demo of the tool.
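To give a flavor of that, here is a minimal sketch that sends a single SMS using Twilio’s Python helper library. The credentials and phone numbers below are placeholders, and the library version you install may differ from the one shown in the demo.

```python
from twilio.rest import Client

# Placeholder credentials -- use your own Twilio account SID and auth token.
account_sid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
auth_token = "your_auth_token"
client = Client(account_sid, auth_token)

# Send a single SMS, e.g. a notification from a civic app.
message = client.messages.create(
    body="Your document request is ready for pickup.",
    from_="+13125550100",   # your Twilio-provisioned number (placeholder)
    to="+13125550199",      # the recipient (placeholder)
)
print(message.sid)  # unique ID of the queued message
```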


Smart Chicago Collaborative Cited as Influencing Federal Digital Communications

Here at Smart Chicago, we seek to use existing tools to get things done. That’s why we launched our Annotations Program last year to publish rich text-based annotations of dense government documents like municipal code, RFPs, contracts, and other documents of this nature.

Today, the GSA cited us as an example in launching their own annotated content: GSA Introduces News Genius to Decode Government Web

The federal government joins the City of Chicago’s Smart Chicago Collaborative, MIT, Harvard and other leading colleges and universities in putting News Genius to practical uses. The Smart Chicago Collaborative opens contracts, municipal code, request for proposals and more. The edX online education initiative, including MIT, Harvard and the Smithsonian Institution, similarly uses the annotation platform to unlock deeper understandings of course materials.

Help us annotate!

The Launch of the Chicago School of Data Project

Smart Chicago has started work on the Chicago School of Data Project, which has three main components:

  • Convene a core group of practitioners in Chicago who are using data to improve the lives of regular residents
  • Document and map the landscape of data activity in Chicago – the entities, tasks, companies, enterprises, civil service organizations, and others who make up the field
  • Plan a region-wide event in early autumn where we will share this mapping work with the larger data community. We seek to showcase all of the activity underway through capacity-building workshops and demonstrations

From this project, we hope to develop a collaborative framework and tools for improving connections across the Chicago data ecosystem – the Chicago School of Data.

Matt Gee, a respected leader in the Chicago data community, has been hired to lead this project. Here’s a look at the work ahead:

  • Convene small-group discussions with key partners to help us frame the work and make sure that we see the entire discipline
  • Lead larger convenings of 20 to 30 people from a wider group of stakeholders to understand needs, identify opportunities, and plan for events
  • Organize a city-wide data census with volunteer data ambassadors canvassing organizations to understand what’s happening now
  • Define the scope, breadth, time period, venue, and zeitgeist of the event itself, in concert with the stakeholders
  • Review existing documents, including grant agreements to practitioners, blog posts from the field, evaluations of existing market activity, the Urban Institute assessment, entries from our city-wide data census, and documentation of conversations conducted throughout the project
  • Define the landscape of data work in Chicago and compile a cohesive narrative that gives shape, direction, and clarity to all included
  • Recruit speakers, teachers, and panelists for the event and work with them on their content

This is a lot of work. It will only be of value if it is inclusive and exhaustive. If you think what we’re saying speaks to you – if you have any inkling that you use data to improve lives in Chicago – we want to hear from you. Even if we’re already deep partners and talk to each other every day, please complete this form.

If you are interested in helping out on the project itself, we need people to conduct interviews and help others complete the form so their voices get heard. If that sounds like you, please let us know here.