WP4: Innovations in Data Production

D4.19 Mapping of two indicative selected standards to the SSHOCro

This report documents the work undertaken within project Task 4.7 (Modeling the SSHOC data life cycle) and describes the process of mapping the social science research metadata standards DDI Codebook and CMDI to the SSHOC Reference Ontology (SSHOCro). The resulting mapping rules are also documented.

D4.3 Survey specific parallel corpora

This document describes the Multilingual Corpus of Survey Questionnaires (MCSQ), a database of survey questionnaire texts. The report summarizes technical information about Version 1.0 (Ada Lovelace) of the MCSQ, dated June 2020, and links to the repository providing access to the code and files used to generate the database.

MS17 Open source CAT TM software selected

This report documents the selection criteria for an open-source Computer-Assisted Translation (CAT) tool with Translation Memory (TM) functionality that will be used in the translation research activities of Task 4.3 of the SSHOC project. The task team describes the role of the milestone in the Task and the means of verification.

MS18 Beta version of automatic verification software available for testing

This report documents the availability of the Automatic Verification Tool (AVT) that is used in the translation research activities of Task 4.3 of the SSHOC project. The task team describes the role of the milestone and the means of verification.

MS20 Selection of SSH metadata standards for mapping to SSHOCro

This report describes the action plan devised in the context of D4.19 "Mapping of two indicative selected standards to the SSHOCro" and the steps taken so far to achieve it. D4.19 focuses on testing information integration and harmonization by mapping selected metadata from metadata standards used in the Social Sciences and Humanities to the SSHOC Reference Ontology.

WP5: Innovations in Data Access

D5.17 Implementation plan for the archeological case study

In SSHOC Task 5.7 (Open Linked Data: Archaeology Case Study), a virtual reconstruction of the Roman theatre in Catania will be created as an example of an actual transition of archaeological data to the cloud, i.e. from data silos on individual computers to web services. The case study is based on a unified workflow that starts with the archaeological documentation and results in a virtual reconstruction.

D5.2 Data access protocol for DBSS data, linked to survey data, conforming to FAIR principles (Access to biomedical data)

The deliverable documents a data access plan for enhancing the availability of biomarker data from dried blood spot samples collected by SHARE. The procedure will be of interest to researchers, survey methodologists, and data archives providing biomedical data collected in survey settings.

D5.9 Framework and contract for international data use agreements on remote access to confidential data

The purpose of this report is to provide a template for a contract for international access to confidential microdata.

MS21 Protocol of laboratory processing of DBSS data

In cross-national population studies, measuring health and lifestyle factors in ageing through self-reported information is challenging due to several difficulties, such as socio-cultural differences in reporting style, social desirability, and access to health care. Moreover, in older people a new health condition may remain unrecognised because it shares symptoms with an existing disease, or because its symptoms are interpreted as a result of ageing per se. Cognitive decline or depressive symptoms may also affect correct recollection.

MS22 Inventory of computing space needed for processing and analysing accelerometer data

This report documents the achievement of Milestone 22 of the SSHOC project, which was to evaluate the feasibility of, and technical requirements for, handling and analysing accelerometer data in large studies. The two main challenges, data transfer and data preparation, were identified and successfully addressed. Although the processing takes a significant amount of time and the implementation is cumbersome, the data can be processed on standard office computers.