Making life easier: Demo session 2
Friday 20th November, 11:45 - 13:15
Mr Andrew Lovett-Barron (Knowsi) - Presenting Author
Knowsi manages consent for researchers, providing a portal compliant with the GDPR (and its regional equivalents) where researchers and participants can manage their consent relationship, now and in the future.
Consent is vital to performing ethical research, but the operational difficulty of tracking it and adhering to it creates barriers for all researchers. These barriers lead to unscrupulous practices in the private sector, and to lost opportunities and friction in the academic context. With new and long-overdue privacy legislation such as the GDPR in the EU, the CCPA in California, and PIPEDA in Canada, good consent-gathering practice now carries legal liability in addition to its long-held ethical obligations.
Closely related to consent gathering is the problem of recruitment and the operational overhead of performing research. Much of the research administration task currently consists of recruiting, scheduling, consent gathering, and media collection. Because these needs are so varied, data and identifiable information are often held in insecure and fragmented settings. Knowsi provides a common platform to collect, track, and manage consent relationships between participants and researchers. Researchers can send consent forms directly from the platform, or simply share a customized or common link in their existing correspondence with the participant. After signing, participants receive a receipt through which they can later access their consent forms and, in line with GDPR legislation, change their consent, contact the researcher for follow-up, and control their data. Ultimately, Knowsi aims to eliminate unethical or sloppy consent-gathering practices in academia and the private sector, and to give research participants the privacy and respect they deserve.
In this demo, I will present the current version of Knowsi at Knowsi.com, discuss how customers and beta testers have used it in qualitative research practice so far, and run an interactive feedback and needs-gathering session with the audience. Specifically, I hope to gather feedback on integration needs and on the DIY workflows audience members have created for themselves.
Synthetic populations: strategies for impacting public health
Mr Sam Adams (RTI International) - Presenting Author
Mr Georgiy Bobashev (RTI International)
Synthetic populations may be defined as statistically and spatially accurate representations of persons, their families, and their related social structure. When loaded into a decision support system backed by microsimulation or agent-based modeling techniques, synthetic populations let us study complex social phenomena by constructing predictive scenarios, testing various intervention strategies against one another to determine the best course of action.
RTI’s process for creating baseline synthetic population data comprises three basic steps: (1) generating synthesized households and agents (persons) from available census or survey sources, (2) generating plausible spatial coordinates for each household, and (3) assigning persons to schools, workplaces, and group quarters. Between 2004 and 2014, RTI was the first to demonstrate the utility of constructing national-scale synthetic datasets for modeling contact-based infectious diseases.
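Steps (1) and (2) of the process above can be illustrated with a minimal sketch. All names and numbers here are hypothetical, and the tract polygon is simplified to a bounding box; RTI's actual pipeline draws on full census and survey sources.

```python
import random

# Hypothetical census marginal for one tract: household counts by size.
household_size_counts = {1: 120, 2: 180, 3: 90, 4: 60, 5: 20}

# Illustrative tract bounding box: (min_lon, min_lat, max_lon, max_lat).
tract_bbox = (-78.70, 35.75, -78.60, 35.85)

def synthesize_households(size_counts, bbox, seed=0):
    """Step (1): draw households matching the size marginal exactly.
    Step (2): give each household a coordinate inside the tract
    (simplified here to uniform sampling within a bounding box)."""
    rng = random.Random(seed)
    min_lon, min_lat, max_lon, max_lat = bbox
    households = []
    hh_id = 0
    for size, count in size_counts.items():
        for _ in range(count):
            households.append({
                "id": hh_id,
                "size": size,
                "lon": rng.uniform(min_lon, max_lon),
                "lat": rng.uniform(min_lat, max_lat),
            })
            hh_id += 1
    return households

households = synthesize_households(household_size_counts, tract_bbox)
```

Step (3), person assignment to schools and workplaces, would then match each synthesized person to nearby institutions, which is beyond this sketch.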
More recently, and perhaps more broadly applicable, we have demonstrated additional uses of synthetic population datasets and methodology: handling sensitive data (PII, PHI) and, most generally, serving as a common data layer for fusing multiple datasets into one. In this approach, the synthetic person record is treated as the lowest common denominator, and auxiliary data sources are projected onto it through statistical models.
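The data-fusion idea can be sketched as follows. This is a deliberately trivial stand-in for the statistical models mentioned above: the "model" is just an outcome rate per age band estimated from a tiny hypothetical survey, which is then projected onto synthetic person records.

```python
import random

# Illustrative only: a small "survey" of (age, binary outcome) pairs.
survey = [(25, 1), (30, 0), (35, 1), (40, 0), (45, 0), (55, 0), (60, 1), (22, 1)]

def fit_rate_by_band(survey, bands=((18, 35), (35, 50), (50, 120))):
    """Fit a trivial model: observed outcome rate per age band.
    A real pipeline would use a proper regression model instead."""
    rates = {}
    for lo, hi in bands:
        obs = [y for age, y in survey if lo <= age < hi]
        rates[(lo, hi)] = sum(obs) / len(obs) if obs else 0.0
    return rates

def project(persons, rates, seed=0):
    """Project the modeled attribute onto synthetic person records,
    treating the synthetic record as the common data layer."""
    rng = random.Random(seed)
    for p in persons:
        for (lo, hi), rate in rates.items():
            if lo <= p["age"] < hi:
                p["outcome"] = rng.random() < rate
                break
    return persons

synthetic_persons = [{"id": i, "age": a} for i, a in enumerate([24, 38, 51, 67])]
projected = project(synthetic_persons, fit_rate_by_band(survey))
```

Because the projection targets synthetic rather than real records, the fused dataset carries no identifiable information from the auxiliary source.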
Our session will discuss the intersection of technology, artificial intelligence, and research and illustrate how synthetic populations can be applied to public health issues to solve complex challenges.
Mousetrap-web: An open, flexible survey paradata collection tool
Mr Felix Henninger (Mannheim Centre for European Social Research, University of Mannheim, Germany; Cognitive Psychology Lab, University of Koblenz-Landau, Germany) - Presenting Author
Dr Pascal J. Kieslich (Mannheim Centre for European Social Research, University of Mannheim, Germany; Experimental Psychology Lab, University of Mannheim, Germany)
Professor Frauke Kreuter (Mannheim Centre for European Social Research, University of Mannheim, Germany; University of Maryland, College Park, Maryland, USA; Institute for Employment Research, Mannheim, Germany)
Paradata accrue as a byproduct of the survey-based data collection process, providing further information about the provenance of any particular observation, in addition to the recorded responses of the participants. Paradata are of special scientific interest in web surveys because they provide insight into the otherwise unobservable process of questionnaire completion, allowing researchers to gather indicators of item difficulty, of cursory or inattentive engagement with a questionnaire, or of participants leaving the page to access external information. Automated web surveys also stand to benefit most from paradata in practice, since they lack a human interviewer who could respond to questions or react to confusion or hesitation on the part of the respondent: post-hoc flagging of responses for closer inspection, or real-time offers of assistance or clarification, could improve the quality of data collected online. Despite their promise, however, tools for the systematic collection of paradata have heretofore only been available as part of proprietary software, and academic efforts at paradata collection have been limited to individual, one-shot projects.
In the current contribution, we therefore introduce and demonstrate mousetrap-web, a client-side in-browser paradata collection framework that records and logs the time series of participants’ interactions with a web page. This includes the overall time a participant spends on any given survey page, response latencies for individual questions, changes in response (for example, corrections in text input or changed selections in multiple-choice questions), and cursor movements and clicks. To contextualize these data across different screen resolutions and aspect ratios, mousetrap-web also captures the screen layout generated by visitors’ browsers and the positions of individual elements, as well as pre-defined areas of interest on the page. Because it captures these additional metadata itself, mousetrap-web works independently of specific questionnaire tools and survey platforms, and is broadly applicable to any online user experience research. We discuss how mousetrap-web can be implemented in different survey frameworks and on generic web sites.
The data collected through mousetrap-web are compatible with the mousetrap companion package for R, which provides further post-processing and analysis functionality, summarising the raw time series data through several indices that capture various aspects of users’ interaction with the page. Both packages are freely available as open source from https://github.com/felixhenninger/mousetrap-web and https://pascalkieslich.github.io/mousetrap/, respectively.
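To make the post-processing idea concrete, here is a minimal sketch, in Python rather than R, of deriving two simple paradata indices from a time series of logged interaction events. The event-log format is hypothetical and is not mousetrap-web's actual schema.

```python
# Hypothetical event stream for one survey page (timestamps in ms).
events = [
    {"t": 0,     "type": "page_load"},
    {"t": 2300,  "type": "focus",  "item": "q1"},
    {"t": 5100,  "type": "answer", "item": "q1", "value": "yes"},
    {"t": 6000,  "type": "answer", "item": "q1", "value": "no"},  # changed answer
    {"t": 9400,  "type": "answer", "item": "q2", "value": "3"},
    {"t": 12000, "type": "page_submit"},
]

def page_dwell_ms(events):
    """Overall time spent on the page, from load to submit."""
    start = next(e["t"] for e in events if e["type"] == "page_load")
    end = next(e["t"] for e in events if e["type"] == "page_submit")
    return end - start

def answer_changes(events):
    """Count response changes per item, a possible difficulty indicator."""
    seen, changes = {}, {}
    for e in events:
        if e["type"] == "answer":
            item = e["item"]
            if item in seen and seen[item] != e["value"]:
                changes[item] = changes.get(item, 0) + 1
            seen[item] = e["value"]
    return changes
```

The mousetrap companion package computes a richer set of indices over cursor trajectories; this sketch only shows the general shape of turning raw event logs into per-item indicators.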
Follow the money: making sense of financial transaction data
Ms Carolina Mattsson (Northeastern University, soon Leiden University) - Presenting Author
Payment systems passively record economic behavior at its most basic unit: individual transactions. Transaction records from these systems describe microeconomic actions at macroeconomic scales. Such data have been severely under-utilized, one challenge being the disconnect between this type of data and classic methods of economic measurement. This demo will introduce "Follow the money", a data transformation and set of analysis tools that puts transaction data from payment systems in dialogue with the cash-flow and input-output approaches to economic measurement.
Follow-the-money is a computationally intensive algorithm that traces the movement of money through a payment system using the provider's own transaction data. The data is transformed from a sequence of financial transactions into a set of weighted trajectories that correspond to observed flows of money. Each trajectory records the sequence of transactions, accounts, and durations encountered by particular units of money moving through the system. Follow-the-money tracks all funds in the system concurrently, for all accounts, as transactions appear in sequence (or in continuous time).
While this in no way reduces the quantity of data, the output of follow-the-money presents the same information in a much more useful form. Some highlights:
- Many user behaviors, such as depositing funds to pay a utility bill, result in sequences of transactions within payment system data. With trajectories, the dataset can be aggregated according to meaningful sequences instead.
- Most systems allow for deposits and withdrawals from various locations. The endpoints of trajectories can be aggregated into a network of money flow or, equivalently, an input-output table. For an example, see Figure 2 in https://arxiv.org/abs/1910.05596.
- Trajectories make the duration that money spends in each account something that can be studied directly. One can measure and visualize, for example, how quickly the average account draws down its salary.
- Trajectories can give a direct measurement of the velocity of money at far finer resolution than is currently the standard in economic measurement.
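The core transformation can be sketched with a toy example. This is a minimal FIFO-based illustration of the tracing idea, not the released implementation, which supports multiple accounting heuristics, records durations, and operates at scale. Account names and the "cash" marker for system boundaries are illustrative.

```python
from collections import deque

# Toy transaction sequence: (timestamp, source, target, amount).
# "cash" marks deposits into and withdrawals out of the system.
transactions = [
    (1, "cash",  "alice", 100),  # deposit
    (2, "alice", "bob",    60),
    (3, "bob",   "cash",   60),  # withdrawal ends a trajectory
    (4, "alice", "cash",   40),
]

def follow_the_money(transactions):
    """Trace funds through accounts, FIFO, producing weighted trajectories:
    (path of accounts traversed, amount of money that took that path)."""
    balances = {}   # account -> deque of (path_so_far, amount) parcels
    finished = []   # completed weighted trajectories
    for t, src, tgt, amount in transactions:
        if src == "cash":
            parcels = deque([(["cash"], amount)])
        else:
            parcels, remaining = deque(), amount
            queue = balances[src]
            while remaining > 0:
                path, amt = queue.popleft()
                take = min(amt, remaining)
                parcels.append((path, take))
                if amt > take:  # split the parcel, leave the rest behind
                    queue.appendleft((path, amt - take))
                remaining -= take
        for path, amt in parcels:
            new_path = path + [tgt]
            if tgt == "cash":
                finished.append((new_path, amt))
            else:
                balances.setdefault(tgt, deque()).append((new_path, amt))
    return finished

trajectories = follow_the_money(transactions)
# e.g. (["cash", "alice", "bob", "cash"], 60) and (["cash", "alice", "cash"], 40)
```

Trajectory endpoints can then be aggregated into the network of money flow described above, and recording timestamps along each path yields the per-account durations and velocity measurements.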
This demo would begin by explaining the follow-the-money approach, its requirements, and its limitations using illustrative toy examples. It would then give several examples of possible analyses on full-scale data, demonstrating the required input files, configuration files, and scripts.
An early release of the underlying code base is available on GitHub (https://github.com/carolinamattsson/follow_the_money). A new release is expected in July 2020.