Andreas Niekler: is a postdoctoral research assistant at the Institute of Computer Science at the University of Leipzig in the Department of Natural Language Processing. He develops computer-based content analysis methods for the social sciences with a focus on machine learning and data management. In an earlier project, he developed the content analysis platform Leipzig Corpus Miner (LCM). He is also a lecturer in text mining and data management and is involved in projects on the discussion and assessment of media quality and data journalism.

 

Gregor Wiedemann: works in the language technology (LT) group at Hamburg University. He studied political science and computer science in Leipzig and Miami. In 2016, he received his doctoral degree in computer science from Leipzig University for his dissertation “Text Mining for Qualitative Data Analysis in the Social Sciences”. Wiedemann has worked on several projects in the fields of digital humanities and computational social science, in which he developed methods and workflows for analyzing large text collections.

 

Christian Kahmann: finished his master’s degree in computer science at Leipzig University in 2016. The topic of his master’s thesis was causal inference using Gaussian processes. Since 2016, he has been a member of the academic staff in the Department of Natural Language Processing at Leipzig University. His research fields are data mining, semantic change, and causal inference.

 

Kenan Erdogan: is a software engineer with a strong background in scientific computing. Before joining the Notebook team at GESIS, he worked on the WikiWho project, making large online collaboration networks available for social science research.

 

Lisa Posch: is a doctoral student in the Department of Computational Social Science at GESIS. Her research focuses on crowdsourcing and language models, and their application within the field of Computational Social Science.

 

Arnim Bleier: is a postdoctoral researcher in the Department of Computational Social Science at GESIS. His research interests lie in the field of Computational Social Science, with an emphasis on reproducible research. In collaboration with social scientists, he develops Bayesian models for the content, structure, and dynamics of social phenomena.

 

Gerhard Heyer: studied Mathematical Logic and Philosophy at Cambridge University (Philosophy Tripos, Christ’s College 1973-1976, Robert Birley Scholarship) and General Linguistics at the University of the Ruhr, where he received his Ph.D. in 1983. After research on AI-based natural language processing at the University of Michigan, Ann Arbor, supported by the Alexander von Humboldt Foundation (Feodor Lynen Scholarship), he worked as a systems specialist and manager in research and development activities in electronic publishing and natural language processing. Since April 1994, Gerhard Heyer has held the chair of Automatic Language Processing in the computer science department of the University of Leipzig. His research is focused on automatic semantic processing of natural language text, with applications in information retrieval and search as well as knowledge management.