Advanced search in Research products
The following results are related to Digital Humanities and Cultural Heritage. Interested in more results? Visit OpenAIRE - Explore.
1,477 Research products, page 1 of 148

  Filters:
  • Digital Humanities and Cultural Heritage
  • Research data
  • Research software
  • Open Access
  • Dataset
  • ZENODO
  • SEDICI (UNLP) - Universidad Nacional de La Plata

  • Open Access
    Authors: 
    Padfield, Joseph; Kontiza, Kalliopi;
    Publisher: Zenodo
    Project: EC | CROSSCULT (693150)

    This dataset will contain all the data collected from observing and tracking the location of users participating in the experiments using the CROSSCULT Pilot 1 app. It will include information about the users’ interaction with the NG collection information. The dataset will comprise anonymised information about the users who will participate in the NG visits using the CROSSCULT app. Additionally, for users who give their permission, information can be automatically extracted/retrieved from their device's location. User agreement must be obtained before tracking and processing data about the user. The dataset generated from the observation will provide information on user behaviour. The exact make-up of the fields included in this dataset will be determined as part of the work carried out within CROSSCULT.

  • Research data . 2015
    Open Access
    Authors: 
    Selden Jr., Robert Z.;
    Publisher: Zenodo

    3D laser scan data for Caddo NAGPRA vessels from the Vanderpool site (41SM77) in Smith County, Texas. All vessels were scanned using a ZScanner700CX running VXElements 2.0 via the scanner direct control function in Geomagic Design X, and the collection can also be accessed here (http://crhr-archive.sfasu.edu/handle/123456789/92). Many thanks to the Caddo Nation of Oklahoma and the Gregg County Historical Museum for permissions and access.

  • Open Access Czech
    Authors: 
    Martin Kuna; Andrea Němcová; Ondřej Chvojka;
    Publisher: Zenodo

    Finds from the archaeological excavations in Březnice (Tábor district, South Bohemia, Czech Republic) in 2005-2009 and 2019. The fieldwork was directed by O. Chvojka (Institute of Archaeology, South Bohemian University in České Budějovice). The data concern the pottery fragments and other finds (daub, loom weights) used for the analysis of deposition processes in the Late Bronze Age settlement features. Based on this material, a model of house biography and the concept of closing rituals were formulated (see Chvojka et al. 2021). These models suggest an interpretation for the so-called trenches, specific sunken features filled with an unusually rich content of secondary-burnt pottery and other finds. Details of the database are given in the attached PDF file. Supported by the Czech Science Foundation (18-10747S). Chvojka, O. – Kuna, M. – Menšík, P. et al. 2021: Rituály ukončení a obnovy. Sídliště mladší doby bronzové v Březnici u Bechyně – Rituals of termination and renewal. The Late Bronze Age settlement in Březnice near Bechyně. České Budějovice – Praha – Plzeň. ISBN 978-80-7394-899-3; ISBN 978-80-7581-039-7; ISBN 978-80-261-1083-5.

  • Open Access English

    The OSDG Community Dataset (OSDG-CD) is a public dataset of thousands of text excerpts, validated with respect to the Sustainable Development Goals (SDGs) by approximately 1,000 OSDG Community Platform (OSDG-CP) citizen scientists from over 110 countries.

    Dataset Information. In support of the global effort to achieve the SDGs, OSDG is releasing a series of SDG-labelled text datasets. The OSDG-CD is the direct result of the work of more than 1,000 volunteers from over 110 countries who have contributed to our understanding of SDGs via the OSDG-CP. The dataset contains tens of thousands of text excerpts (henceforth: texts) which were validated by the Community volunteers with respect to SDGs. The data can be used to derive insights into the nature of SDGs using either ontology-based or machine learning approaches. The file contains 40,067 text excerpts and a total of 277,524 assigned labels.

    Source Data. The dataset consists of paragraph-length text excerpts derived from publicly available documents, including reports, policy documents and publication abstracts. A significant number of documents (more than 3,000) originate from UN-related sources such as SDG-Pathfinder and SDG Library; these sources often contain documents that already have SDG labels associated with them. Each text comprises 3 to 6 sentences and is about 90 words on average.

    Methodology. All texts are evaluated by volunteers on the OSDG-CP, a platform that brings together researchers, subject-matter experts and SDG advocates from around the world to create a large and accurate source of textual information on the SDGs. The Community volunteers use the platform to participate in labelling exercises where they validate each text's relevance to SDGs based on their background knowledge.
    In each exercise, the volunteer is shown a text together with an SDG label associated with it (this usually comes from the source) and asked to either accept or reject the suggested label. There are three types of exercises:
    • The mandatory introductory exercise, consisting of 10 pre-selected texts, which every volunteer must complete before accessing the other two types. Upon completion, the volunteer reviews the exercise by comparing their answers with those of the rest of the Community using aggregated statistics, i.e., the share of those who accepted and rejected the suggested SDG label for each of the 10 texts. This helps the volunteer get a feel for the platform.
    • SDG-specific exercises, where the volunteer validates texts with respect to a single SDG, e.g., SDG 1 No Poverty.
    • The All SDGs exercise, where the volunteer validates a random sequence of texts in which each text can have any SDG as its associated label.
    After finishing the introductory exercise, the volunteer is free to select either SDG-specific or All SDGs exercises. Each exercise, regardless of its type, consists of 100 texts. Once an exercise is finished, the volunteer can either label more texts or exit the platform. The volunteer can also finish an exercise early; all progress is still saved and recorded. To ensure quality, each text is validated by up to 9 different volunteers, and all texts included in the public release of the data have been validated by at least 3 different volunteers. It is worth keeping in mind that all exercises present the volunteers with a binary decision problem, i.e., either accept or reject a suggested label. The volunteers are never asked to select one or more SDGs that a certain text might relate to; the rationale behind this set-up is that asking a volunteer to select from 17 SDGs is extremely inefficient. Currently, all texts are validated against only one associated SDG label.

    Columns:
    • doi - Digital Object Identifier of the original document
    • text_id - unique text identifier
    • text - text excerpt from the document
    • sdg - the SDG the text is validated against
    • labels_negative - the number of volunteers who rejected the suggested SDG label
    • labels_positive - the number of volunteers who accepted the suggested SDG label
    • agreement - agreement score based on the formula \(agreement = \frac{|labels_{positive} - labels_{negative}|}{labels_{positive} + labels_{negative}}\)

    Further Information. To learn more about the project, please visit the OSDG website and the official GitHub page. Do not hesitate to share your outputs with us, be it a research paper, a machine learning model, a blog post, or just an interesting observation. All queries can be directed to community@osdg.ai. The CSV file uses UTF-8 character encoding; for easy access in MS Excel, open the file using Data → From Text/CSV and split the data into columns using a TAB delimiter.
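The agreement score described above can be recomputed directly from the two label-count columns. A minimal Python sketch (the example counts and the commented file name are illustrative assumptions, not values from the dataset):

```python
def agreement(labels_positive: int, labels_negative: int) -> float:
    """Agreement score: |positive - negative| / (positive + negative)."""
    total = labels_positive + labels_negative
    return abs(labels_positive - labels_negative) / total

# Hypothetical example: 7 volunteers accepted and 2 rejected the label.
score = agreement(7, 2)  # 5/9, i.e. moderate agreement

# The file is UTF-8 and tab-delimited, so with pandas it could be loaded as:
# import pandas as pd
# df = pd.read_csv("osdg-community-data.csv", sep="\t", encoding="utf-8")
```

Note that a unanimous vote (all accept or all reject) yields an agreement of 1.0, while an even split yields 0.0.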

  • Research data . 2022
    Open Access Portuguese
    Authors: 
    Botica, Natália; Silva, José; Luís, Luís;
    Publisher: Zenodo

    Motif drawn from an orthophoto generated in Agisoft, using the 3D model from a photogrammetric survey, within the scope of Project RARAA - Repositório de Arte Rupestre de Acesso Aberto (Open Access Rock Art Repository) - COA/OVD/0097/2019. A "Caetra"-type shield being used by the warrior in combat with the horseman - Vermelhosa site, Rocha 3, Côa Valley, Portugal.

  • Research data . 2015
    Open Access
    Authors: 
    Selden Jr., Robert Z.;
    Publisher: Zenodo

    In 2014-2015, Caddo vessels from the Tuck Carpenter (41CP5) collection were scanned at the Center for Regional Heritage Research. These scans were generated for use in a study of 3D geometric morphometrics and for public outreach. Many thanks to the Caddo Nation of Oklahoma and the Anthropology and Archaeology Laboratory for the requisite permissions and access.

  • Open Access English
    Authors: 
    Javier Sanz Rodrigo;
    Publisher: Zenodo

    This is the input data used in the Wakebench MOST benchmark.

  • Open Access
    Authors: 
    Crymble, Adam; Falcini, Louise; Hitchcock, Tim;
    Publisher: Zenodo

    This dataset should be used instead of the earlier version (https://zenodo.org/record/13103). This updated dataset makes accessible the uniquely comprehensive records of vagrant removal from, through, and back to Middlesex, encompassing the details of some 14,789 men and women removed (either forcibly or voluntarily) as undesirables between 1777 and 1786. It includes people ejected from London as vagrants, and those sent back to London from counties beyond. Significant background material is available on the 'London Lives' website, which provides additional context for these records. The authors also recommend the following article: Tim Hitchcock, Adam Crymble, and Louise Falcini, ‘Loose, Idle and Disorderly: Vagrant Removal in Late Eighteenth-Century Middlesex’, _Social History_. Each record includes the name of the vagrant, his or her parish of legal settlement, where they were picked up by the vagrant contractor, where they were dropped off, and the name of the magistrate who had proclaimed them a vagrant. Each entry is georeferenced, making it possible to follow the journeys of thousands of failed migrants and temporary Londoners back to their place of origin in the late eighteenth century. Each entry has 29 columns of data, all of which are described in the READ ME file. The original records were created by Henry Adams, the vagrant contractor of Middlesex, who had - as had his father before him - conveyed vagrants from Middlesex gaols to the edge of the county, where they would be sent onwards towards their parish of legal settlement. His role also involved picking up vagrants on their way back to Middlesex, expelled from elsewhere, as well as those being shepherded through to counties beyond, as part of the national network of removal. Eight times per year, at each session of the Middlesex Bench, Adams submitted lists of vagrants conveyed as proof of his having transported these individuals, after which he would be paid for his services.
    The dataset contains all 42 surviving lists out of a possible 65. The gaps in the records are unfortunately not evenly spaced throughout the year: we know more, for example, about removal in October than in May. Spellings have been interpreted and standardized where possible, and georeferences have been added where they could be identified. This dataset was created for 21st-century historians and should not be construed as a true transcription of the original sources; instead, the goal was to use a limited vocabulary and to interpret the entries rather than recreate them verbatim. While this is undesirable for anyone interested in spelling variations of names and place names in the eighteenth century, it is the authors' hope that these interpretations will make it easier to conduct quantitative analysis and studies in historical geography. This dataset has been published with additional contextual information in the Journal of Open Humanities Data: Crymble, A., Falcini, L. and Hitchcock, T. 2015. Vagrant Lives: 14,789 Vagrants Processed by the County of Middlesex, 1777–1786. Journal of Open Humanities Data 1: e1, DOI: http://dx.doi.org/10.5334/johd.1

  • Open Access
    Authors: 
    Sohail, Mashaal; Maier, Robert M.; Ganna, Andrea; Bloemendal, Alex; Martin, Alicia R.; Turchin, Michael C.; Chang, Charleston W. K.; Hirschhorn, Joel; Daly, Mark J.; Patterson, Nick; +4 more
    Publisher: Data Archiving and Networked Services (DANS)
    Project: NIH | Population mixture in evo... (1R01GM100233-01), NIH | Powering whole genome seq... (3U01HG009088-04S4), NIH | Leveraging functional dat... (2R01HG006399-10A1), NIH | The origin, the function ... (5R35GM127131-04), NIH | Statistical methods for s... (5R01MH101244-02)

    UK Biobank custom height association statistics on ~700k genotyped SNPs (sohail_maier_2018.zip). The zip file contains six files:
    (1) ukb_cal_v2_height_allancestry_10pcs_assoc_linear.tsv
    (2) ukb_cal_v2_height_allancestry_nopcs_assoc_linear.tsv
    (3) ukb_cal_v2_height_britishancestry_10pcs_assoc_linear.tsv
    (4) ukb_cal_v2_height_britishancestry_nopcs_assoc_linear.tsv
    (5) ukb_cal_v2_height_sibs_perm_qfam.tsv
    (6) ukb_cal_v2_height_wbsibs_perm_qfam.tsv
    Files (1)-(4) are height GWAS estimates on all samples / white British samples, using either 10 PCs or no PCs as covariates; sex was included as a covariate in all analyses. File (3) is equivalent to the UK Biobank height GWAS from the Neale lab; the remaining small differences can be explained by genotype differences between the UK Biobank imputed data and genotyped data. Files (5) and (6) are family-based estimates from 20,166 sibling pairs of any ancestry (5) and 17,358 sibling pairs where both siblings are of white British ancestry (6) in the UK Biobank. Pairs of samples with IBS0 > 0.0018 and kinship coefficient > 0.185 were identified as sibling pairs. For the analyses in Sohail, Maier et al., only the subset of ~300,000 SNPs with SDS scores was used. For a description of the columns in files (1)-(4), please see the PLINK documentation for the ‘--linear’ command; for files (5) and (6), see the PLINK documentation for the ‘--qfam’ command. In both cases, column “A2” has been added and denotes the non-effect allele. “EMP1” and “NP” refer to the permutation p-value and number of permutations, respectively. Please note: these data are derived from the UK Biobank Resource under Application Number 18597.
    Genetic predictions of height differ among human populations and these differences have been interpreted as evidence of polygenic adaptation.
These differences were first detected using SNPs genome-wide significantly associated with height, and shown to grow stronger when large numbers of sub-significant SNPs were included, leading to excitement about the prospect of analyzing large fractions of the genome to detect polygenic adaptation for multiple traits. Previous studies of height have been based on SNP effect size measurements in the GIANT Consortium meta-analysis. Here we repeat the analyses in the UK Biobank, a much more homogeneously designed study. We show that polygenic adaptation signals based on large numbers of SNPs below genome-wide significance are extremely sensitive to biases due to uncorrected population structure. More generally, our results imply that typical constructions of polygenic scores are sensitive to population structure and that population-level differences should be interpreted with caution.
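The sibling-pair criterion quoted in the description (IBS0 > 0.0018 and kinship coefficient > 0.185) amounts to a simple two-threshold filter. A hedged Python sketch; the example pairs and field names are hypothetical illustrations, not values from the released files:

```python
# Sketch of the sibling-pair criterion described above. The thresholds come
# from the dataset description; the example pair records are made up.
IBS0_THRESHOLD = 0.0018
KINSHIP_THRESHOLD = 0.185

def is_sibling_pair(ibs0: float, kinship: float) -> bool:
    """A pair is classed as siblings when both thresholds are exceeded."""
    return ibs0 > IBS0_THRESHOLD and kinship > KINSHIP_THRESHOLD

pairs = [
    {"ids": ("S1", "S2"), "ibs0": 0.0025, "kinship": 0.25},  # sibling-like
    {"ids": ("S1", "S3"), "ibs0": 0.0004, "kinship": 0.02},  # unrelated
]
siblings = [p["ids"] for p in pairs if is_sibling_pair(p["ibs0"], p["kinship"])]
# siblings -> [("S1", "S2")]
```

Requiring both a high kinship coefficient and a non-trivial IBS0 share distinguishes full siblings from parent-offspring pairs, which have similar kinship but essentially no IBS0.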
