Advanced search in Research products
The following results are related to Neuroinformatics. Are you interested in viewing more results? Visit OpenAIRE - Explore.

Filters: Neuroinformatics · Open Access · Research data · English

Sorted by: Date (most recent)
  • Authors: Yeung, Jonas; DeYoung, Taylor; Spring, Shoshana; de Guzman, A. Elizabeth; +5 Authors

    Biological sex influences the prevalence of developmental disorders through sex hormones and sex chromosomes. However, our understanding of their impacts on neurodevelopment and response to injury remains limited. In this project, we use high-resolution magnetic resonance imaging (MRI) to investigate the four core genotype (FCG) mouse model, which separates the influences of sex hormones and sex chromosomes during normal brain development and after cranial radiation therapy. Sex differences are attributed to either sex hormones or sex chromosomes. The FCG model distinguishes the two by decoupling the sex-determining region (SRY) from the Y chromosome, moving SRY onto an autosome. This yields four core sex genotypes: XX NULL, XY NULL, XX SRY, and XY SRY. This dataset represents the most comprehensive mouse brain imaging study employing the FCG model to date, with 5 timepoints (P14, P23, P42, P63, P98), Ccl2 wildtype (+/+) and knockout (-/-) mice, and irradiated (7 Gy) and sham (0 Gy) mice. All in all, a total of 1071 images! The dataset presented here is for an upcoming manuscript to be published.

    In vivo MRI scans were obtained using a 7-T MRI scanner (Bruker BioSpin, Ettlingen, Germany) equipped with four cryocoils for simultaneous imaging of four mice. The scans were performed with the following settings: T1-weighted 3D gradient-echo sequence, 75 μm isotropic resolution, TR = 26 ms, TE = 8.25 ms, flip angle = 26°, field of view = 25×22×22 mm, and matrix size = 334×294×294.

    All structural MR images are stored in images.tar.gz. Images were segmented and registered using an automated pipeline whose outputs are stored in labels.tar.gz. The consensus average and labels are final_average.mnc and final_labels.mnc, respectively. Extracted structure volumes alongside the metadata are included in df_micevolumes.csv. Structural MRIs are in MINC format, and readme.txt provides further information on this dataset.

    The authors express their sincere gratitude for the research funding received from the Canadian Institutes of Health Research (158622, 168037) and the Ontario Institute for Cancer Research (IA-024), with funding from the Government of Ontario and Restracomp from the SickKids Research Training Centre.

    Code/Software:
    - MINC: https://www.bic.mni.mcgill.ca/ServicesSoftware/MINC
    - RMINC: https://github.com/Mouse-Imaging-Centre/RMINC
    - Pydpiper: https://github.com/Mouse-Imaging-Centre/pydpiper/tree/v2.0.19.1
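    A minimal sketch of how the extracted volumes might be summarized with pandas; the column names below are assumptions, not taken from the dataset, so consult readme.txt for the actual schema of df_micevolumes.csv:

```python
# Sketch: summarize extracted structure volumes by core genotype and age.
# Column names ("genotype", "timepoint", "volume_mm3") are assumptions;
# check readme.txt for the real schema.
import pandas as pd

df = pd.read_csv("df_micevolumes.csv")
summary = (
    df.groupby(["genotype", "timepoint"])["volume_mm3"]
      .agg(["mean", "std", "count"])
)
print(summary)
```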

    ZENODO · Dataset · 2024 · License: CC BY · Data sources: ZENODO; Datacite

    This Research product is the result of merged Research products in OpenAIRE.

    Citations: 0 · Popularity: average · Influence: average · Impulse: average (metrics by BIP!)
  • Authors: Hallquist, Michael; Hwang, Kai; Luna, Beatriz; Dombrovski, Alexandre

    fMRI acquisition

    Neuroimaging data during the clock task were acquired in a Siemens Tim Trio 3T scanner for the original study and a Siemens Prisma 3T scanner for the replication study at the Magnetic Resonance Research Center, University of Pittsburgh. Due to participant-dependent variation in response times on the task, each fMRI run varied in length from 3.15 to 5.87 minutes (M = 4.57 minutes, SD = 0.52). Functional imaging data for the original/replication study were acquired using a simultaneous multislice sequence sensitive to BOLD contrast: TR = 1.0/0.6 s, TE = 30/27 ms, flip angle = 55/45°, multiband acceleration factor = 5/5, voxel size = 2.3/3.1 mm³. We also obtained a sagittal MPRAGE T1-weighted scan: voxel size = 1/1 mm³, TR = 2.2/2.3 s, TE = 3.58/3.35 ms, GRAPPA 2/2x acceleration. The anatomical scan was used for coregistration and nonlinear transformation to functional and stereotaxic templates. We also acquired gradient-echo fieldmap images (TEs = 4.93/4.47 ms and 7.39/6.93 ms) for each subject to mitigate inhomogeneity-related distortions in the functional MRI data.

    Preprocessing of fMRI data

    Anatomical scans were registered to the MNI152 template (82) using both affine (ANTS SyN) and nonlinear (FSL FNIRT) transformations. Functional images were preprocessed using tools from NiPy (83), AFNI (version 19.0.26) (84), and the FMRIB software library (FSL version 6.0.1) (85). First, slice timing and motion coregistration were performed simultaneously using a four-dimensional registration algorithm implemented in NiPy (86). Non-brain voxels were removed from functional images by masking voxels with low intensity and by the ROBEX brain extraction algorithm (87). We reduced distortion due to susceptibility artifacts using fieldmap correction implemented in FSL FUGUE. Participants' functional images were aligned to their anatomical scan using the white matter segmentation of each image and a boundary-based registration algorithm (88), augmented by fieldmap unwarping coefficients. Given the low contrast between gray and white matter in echoplanar scans with fast repetition times, we first aligned functional scans to a single-band fMRI reference image with better contrast. The reference image was acquired using the same scanning parameters, but without multiband acceleration. Functional scans were then warped into MNI152 template space (2.3 mm output resolution) in one step using the concatenation of the functional-reference, fieldmap unwarping, reference-structural, and structural-MNI152 transforms. Images were spatially smoothed with a 5 mm full-width at half maximum (FWHM) kernel using a nonlinear smoother implemented in FSL SUSAN. To reduce head motion artifacts, we then conducted an independent component analysis for each run using FSL MELODIC. The spatiotemporal components were then passed to a classification algorithm, ICA-AROMA, validated to identify and remove motion-related artifacts (89). Components identified as noise were regressed out of the data using FSL regfilt (non-aggressive regression approach). ICA-AROMA has performed very well in head-to-head comparisons of alternative strategies for reducing head motion artifacts (90). We then applied a 0.008 Hz temporal high-pass filter to remove slow-frequency signal changes (91); the same filter was applied to all regressors in GLM analyses. Finally, we renormalized each voxel time series to have a mean of 100 to provide similar scaling of voxelwise regression coefficients across runs and participants.
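    A minimal sketch of the last two steps above (the 0.008 Hz high-pass and the mean-100 renormalization); the 2nd-order Butterworth design and the time × voxels array layout are assumptions, not the authors' exact code:

```python
# Sketch: 0.008 Hz temporal high-pass plus renormalization of each voxel
# time series to a mean of 100, as described in the preprocessing text.
# Filter order and array layout (time x voxels) are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def highpass_and_renorm(bold, tr_sec, cutoff_hz=0.008):
    fs = 1.0 / tr_sec                          # sampling rate in Hz
    b, a = butter(2, cutoff_hz, btype="highpass", fs=fs)
    filtered = filtfilt(b, a, bold, axis=0)    # zero-phase filtering over time
    # Re-center every voxel on 100 so regression coefficients scale comparably
    return filtered - filtered.mean(axis=0) + 100.0

bold = np.random.randn(300, 5000)              # e.g., 300 TRs x 5000 voxels
cleaned = highpass_and_renorm(bold, tr_sec=1.0)
```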
    Treatment of head motion

    In addition to mitigating head motion-related artifacts using ICA-AROMA, we excluded runs in which more than 10% of volumes had a framewise displacement (FD) of 0.9 mm or greater, as well as runs in which head movement exceeded 5 mm at any point in the acquisition. This led to the exclusion of 11 runs in total, yielding 549 usable runs across participants. Furthermore, in voxelwise GLMs, we included the mean time series from deep cerebral white matter and the ventricles, as well as the first derivatives of these signals, as confound regressors (90).

    MEG data acquisition

    MEG data were acquired using an Elekta Neuromag VectorView MEG system (Elekta Oy, Helsinki, Finland) in a three-layer magnetically shielded room. The system comprised 306 sensors: 204 planar gradiometers and 102 magnetometers. In this project we only included data from the gradiometers, as data from the magnetometers added noise and had a different amplitude scale. MEG data were recorded continuously with a sampling rate of 1000 Hz. We measured head position relative to the MEG sensors throughout the recording period using four continuous head position indicators (cHPI) that continuously emit sinusoidal signals, and head movements were corrected offline during preprocessing. To monitor saccades and eye blinks, we used two bipolar electrode pairs to record the vertical and horizontal electrooculogram (EOG).

    Preprocessing of MEG data

    Flat or noisy channels were identified by manual inspection, and all data were preprocessed using the temporal signal space separation (TSSS) method (92, 93). TSSS suppresses environmental artifacts from outside the MEG helmet and performs head movement correction by aligning sensor-level data to a common reference (94). This realignment allowed sensor-level data to be pooled across subjects for group analyses of sensor-space data. Cardiac and ocular artifacts were then removed using an independent component analysis, decomposing MEG sensor data into independent components (ICs) with the infomax algorithm (95). Each IC was then correlated with the ECG and EOG recordings, and an IC was designated as an artifact if the absolute value of the correlation was at least three standard deviations higher than the mean of all correlations. The non-artifact ICs were projected back to the sensor space to reconstruct the signals for analysis. After preprocessing, data were epoched to the onset of feedback, with a window from -0.7 to 1.0 seconds. Trials with gradiometer peak-to-peak amplitudes exceeding 3000 fT/cm were excluded. Please note that the following processing step has NOT been applied to the MEG data: "For each sensor, we computed the time-frequency decomposition of activity on each trial by convolving time-domain signals with Morlet wavelet, stepping from 2 to 40 Hz in logarithmic scale using 6 wavelet cycles".
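    Should the quoted (deliberately omitted) decomposition be needed downstream, a sketch with MNE-Python follows; this is not the authors' tooling, and the number of frequency bins and the synthetic epoch length are assumptions:

```python
# Sketch of the quoted Morlet decomposition: 2-40 Hz in logarithmic steps
# with 6 wavelet cycles, via MNE-Python. 25 frequency bins is an assumption;
# real epochs (-0.7 to 1.0 s) would need padding so the 3 s wavelet at 2 Hz fits.
import numpy as np
from mne.time_frequency import tfr_array_morlet

sfreq = 1000.0                                   # MEG sampling rate, per the text
epochs = np.random.randn(10, 204, 4001)          # (trials, gradiometers, samples)
freqs = np.logspace(np.log10(2), np.log10(40), 25)
power = tfr_array_morlet(epochs, sfreq=sfreq, freqs=freqs,
                         n_cycles=6, output="power")
print(power.shape)                               # (trials, channels, freqs, times)
```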
    # Reward-based option competition in human dorsal stream and transition from stochastic exploration to exploitation in continuous space

    Behavioral, fMRI and MEG data.

    ## Description of the data and file structure

    Directories and files within hallquist_etal_supplemental_data.zip:

    fig_1: behavioral data from the fMRI study
    - trial_data_compact.RData: RData file with the following variables:
      - dataset: study name
      - id: participant's numeric id
      - run: sequential number of the current 50-trial block, 1-8
      - trial: trial
      - rewFunc: contingency ("DEV", "CEV", "CEVR", "IEV")
      - rt_csv: response time in seconds
      - magnitude: expected reward magnitude
      - probability: expected reward probability
      - ev: expected reward value
      - rt_vmax: response time with the highest learned value, as predicted by the SCEPTIC model
      - score_csv: reward received

    fig_2: DAN parcellation, whole-brain statistical parametric maps (BOLD signal)
    - entropy_change_wb_unthresholded_1mm.nii.gz: unthresholded parametric entropy change map
    - entropy_wb_unthresholded_1mm.nii.gz: unthresholded parametric entropy map
    - Schaefer_444_final_2009c_1.0mm.nii.gz: Schaefer et al.'s 400-region parcellation in MNI 2009c space
    - Schaefer2018_DAN_2009c_FINAL47.nii.gz: same, but only dorsal attention stream regions

    fig_3: deconvolved DAN BOLD signal, same parcellation as in fig_2
    - rt_aligned_deconvolved_bold.RData: RData file with the following variables:
      - id: participant's numeric id
      - run: sequential number of the current 50-trial block, 1-8
      - run_trial: trial within run (1:50); note the difference from the behavioral data file's "trial" variable
      - feedback_onset: onset of feedback, in seconds
      - rewFunc: contingency ("DEV", "CEV", "CEVR", "IEV")
      - atlas_value: number of dorsal stream node as in Table S2
      - label: label of dorsal stream node as in Table S2 and Figure S2
      - decon_interp: deconvolved BOLD signal
      - side: right ("R") or left ("L")

    fig_4: BOLD regional regression coefficients corresponding to the entropy change maps in fig_2
    - entropy_change_betas.csv.gz: text file with the following variables:
      - id: participant's numeric id
      - atlas_value: number of dorsal stream node as in Table S2
      - x, y, z: MNI coordinates
      - value: mean regional regression coefficient for entropy change

    fig_5: MEG time-frequency domain statistics for entropy change
    - meg_time_frequency_entropy_change_ri.rds: .rds (R Data Serialization) file with the following variables:
      - Time: time in seconds relative to feedback
      - Freq: frequency, Hz
      - estimate: regression coefficient, estimate
      - std.error: regression coefficient, standard error
      - statistic: test statistic
      - df: degrees of freedom
      - p.value: uncorrected p-value
      - p_fdr: FDR-corrected p-value

    ## Code/Software

    SCEPTIC computational model: [10.5281/zenodo.1336285](https://zenodo.org/doi/10.5281/zenodo.1336285)

    Primates exploring and exploiting a continuous sensorimotor space rely on dynamic maps in the dorsal stream. Two complementary perspectives exist on how these maps encode rewards. Reinforcement learning models integrate rewards incrementally over time, efficiently resolving the exploration/exploitation dilemma. Working memory buffer models explain the rapid plasticity of parietal maps but lack a plausible exploration/exploitation policy. The reinforcement learning model presented here unifies both accounts, enabling rapid, information-compressing map updates and an efficient transition from exploration to exploitation. As predicted by our model, activity in human fronto-parietal dorsal stream regions, but not in MT+, tracks the number of competing options, as preferred options are selectively maintained on the map while spatiotemporally distant alternatives are compressed out. When valuable new options are uncovered, posterior beta1/alpha oscillations desynchronize within 0.4-0.7 s, consistent with option encoding by competing beta1-stabilized subpopulations. Altogether, outcomes matching locally cached reward representations rapidly update parietal maps, biasing choices toward often-sampled, rewarded options.
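    A minimal sketch for loading the R-native files listed above in Python, assuming pyreadr and the in-archive paths shown in the listing (the authors' own workflow is R-based):

```python
# Sketch: read the behavioral trials and the MEG statistics with pyreadr.
# Paths assume the zip was extracted in place; variable names follow the
# file listing above.
import pyreadr

trials = pyreadr.read_r("fig_1/trial_data_compact.RData")   # dict of DataFrames
df = next(iter(trials.values()))
print(df[["id", "run", "trial", "rt_csv", "score_csv"]].head())

meg = pyreadr.read_r("fig_5/meg_time_frequency_entropy_change_ri.rds")
meg_df = meg[None]                        # an .rds stores one unnamed object
print(meg_df.query("p_fdr < 0.05").shape) # count FDR-significant cells
```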

    DRYAD; ZENODO · Dataset · 2024 · License: CC0 · Data sources: Datacite; ZENODO

    This Research product is the result of merged Research products in OpenAIRE.

    Citations: 0 · Popularity: average · Influence: average · Impulse: average (metrics by BIP!)
  • Authors: Hayder, Amin

    Sequencing data (fastq.gz) was processed using the 10X Genomics Space Ranger Count pipeline. L75027 = Mouse_Hippocampus_SD; L75031 = Mouse_Hippocampus_ENR. Outputs include a spatial folder containing outputs that capture the spatiality of the data and a filtered_feature_bc_matrix folder containing the tissue-associated barcodes in MEX format.

    spatial/
    - aligned_fiducials.jpg: aligned fiducials QC image
    - detected_tissue_image.jpg: detected tissue QC image
    - scalefactors_json.json: scale conversion factors for spot diameter and coordinates at various image resolutions
    - tissue_hires_image.png: downsampled full-resolution image
    - tissue_lowres_image.png: full-resolution image downsampled to 600 pixels on the longest dimension
    - tissue_positions_list.csv: CSV containing spot barcodes

    filtered_feature_bc_matrix.h5: contains only tissue-associated barcodes, in HDF5 format

    filtered_feature_bc_matrix/
    - barcodes.tsv.gz: list of barcodes
    - features.tsv.gz: list of features
    - matrix.mtx.gz: count matrix
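    A minimal loading sketch with scanpy, one common reader for Space Ranger output (not part of this dataset); the unpacked directory name is hypothetical and must contain filtered_feature_bc_matrix.h5 and spatial/:

```python
# Sketch: load one sample's Space Ranger output as an AnnData object.
# The directory name is hypothetical; point it at an unpacked sample.
import scanpy as sc

adata = sc.read_visium("L75027_Mouse_Hippocampus_SD/")
adata.var_names_make_unique()   # feature names can repeat across the genome
print(adata)                    # spots x genes, with spatial coordinates attached
sc.pl.spatial(adata)            # spots overlaid on the low-resolution tissue image
```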

    ZENODO · Dataset · 2024 · License: CC BY · Data sources: ZENODO; Datacite

    This Research product is the result of merged Research products in OpenAIRE.

    Citations: 0 · Popularity: average · Influence: average · Impulse: average (metrics by BIP!)
  • Authors: Hayder, Amin

    Raw data (.brw) were recorded with BrainWave software, and detected LFP events are stored in .bxr files for the SD and ENR groups. These extracellular recordings were obtained from acute hippocampal-cortical slices and were collected at a sampling frequency of 14 kHz per electrode.
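    Both .brw and .bxr are HDF5 containers, so a quick way to survey a file before committing to a parser is h5py; a minimal sketch (the filename is hypothetical, and group names vary across BrainWave versions, so none are assumed here):

```python
# Sketch: inspect the internal layout of a BrainWave HDF5 file.
import h5py

with h5py.File("recording_SD.brw", "r") as f:   # hypothetical filename
    f.visit(print)                              # list every group/dataset path
```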

    ZENODO · Dataset · 2024 · License: CC BY · Data sources: ZENODO; Datacite

    This Research product is the result of merged Research products in OpenAIRE.

    Citations: 0 · Popularity: average · Influence: average · Impulse: average (metrics by BIP!)
  • Authors: Bastiaansen, Wietske; Rousian, Melek; Koning, Anton; Niessen, Wiro; +3 Authors

    Early detection of (patho)physiological variation during prenatal neurodevelopment is challenging due to the limited knowledge of the normal physiological development of the embryonic and early fetal brain. To provide a detailed picture of normal embryonic and early fetal brain development, we created the 4D Human Embryonic Brain Atlas: a spatiotemporal atlas based on three-dimensional (3D) ultrasound images acquired between 8 and 12 weeks of gestational age. The atlas was built using a deep learning approach for groupwise image registration that accounts for the rapid morphological changes during the first trimester, and it was created and validated using 831 3D ultrasound volumes from 402 subjects of the Rotterdam Periconceptional cohort. It provides unique insight into this crucial early-life period and has the potential to enhance the detection, prevention, and treatment of prenatal neurodevelopmental disorders.

    ZENODO · Dataset · 2024 · License: CC BY-NC · Data sources: ZENODO; Datacite

    This Research product is the result of merged Research products in OpenAIRE.

    Citations: 0 · Popularity: average · Influence: average · Impulse: average (metrics by BIP!)
  • Authors: Lettieri, Giada; Handjaras, Giacomo; Cappello, Elisa Morgana; Setti, Francesca; +9 Authors

    # Dissecting abstract, modality-specific, and experience-dependent coding of affect in the human brain

    ## Description of the data and file structure

    #### Behavioral data and code

    **behavioral/code/** -> all MATLAB functions needed to analyze the behavioral data (i.e., categorical and valence ratings collected during the auditory, visual, and multisensory experiments).

    **behavioral/data/categorical_audio/** -> 20 folders (one for each participant) storing the categorical annotations of emotion collected during the auditory experiment. The folder also includes subs_demographics.csv, which stores participants' demographics. In each participant's folder, there are 6 MATLAB workspace files (*_run0?.mat) storing ratings of the affective experience for each run. In each workspace, there is a structure called *ReMoTa_Output*, which contains all behavioral data and experiment details. Specifically:
    - *ReMoTa_Output.Date_Experiment* stores the date of the experiment.
    - *ReMoTa_Output.MovieFile* stores the path to the stimulus file.
    - *ReMoTa_Output.MovieHeightInDeg* is the height of the video in visual degrees.
    - *ReMoTa_Output.MovieWidthInDeg* is the width of the video in visual degrees.
    - *ReMoTa_Output.Ratings* stores the emotion annotations provided by the participant.
    - *ReMoTa_Output.RatingsSamplingFrequency* is the sampling frequency of emotion annotations in Hz.
    - *ReMoTa_Output.ResponseTime* is the timing of the button press (not very useful).
    - *ReMoTa_Output.StepsInIntensity* is the number of levels of intensity that could be specified for each emotional instance (if == 1, then only presence/absence).
    - *ReMoTa_Output.Subject* is the participant id.
    - *ReMoTa_Output.TaggingCategories* stores the labels (in Italian) of the emotion categories used in the experiment. The order reflects the rows in *ReMoTa_Output.Ratings*.

    **behavioral/data/categorical_audiovideo/** -> 22 folders (one for each participant) storing the categorical annotations of emotion collected during the multisensory experiment. The folder also includes subs_demographics.csv, which stores participants' demographics. In each participant's folder, there are 6 MATLAB workspace files (*_run0?.mat) storing ratings of the affective experience for each run; each contains a *ReMoTa_Output* structure organized identically across conditions (please refer to the previous section for further details).

    **behavioral/data/categorical_video/** -> 20 folders (one for each participant) storing the categorical annotations of emotion collected during the visual experiment. The folder also includes subs_demographics.csv. Per-participant files and the *ReMoTa_Output* structure are organized as described above.

    **behavioral/data/valence_audio/** -> 20 folders (one for each participant) storing the valence ratings collected during the auditory experiment. The folder also includes subs_demographics.csv. Per-participant files and the *ReMoTa_Output* structure are organized as described above.

    **behavioral/data/valence_audiovideo/** -> 21 folders (one for each participant) storing the valence ratings collected during the multisensory experiment. The folder also includes subs_demographics.csv. Per-participant files and the *ReMoTa_Output* structure are organized as described above.

    **behavioral/data/valence_video/** -> 21 folders (one for each participant) storing the valence ratings collected during the visual experiment. The folder also includes subs_demographics.csv. Per-participant files and the *ReMoTa_Output* structure are organized as described above.

    #### Neuroimaging data and code

    **fmri/code/** -> fmri_preprocessing.sh is the bash script to preprocess raw fMRI data; it requires AFNI, ANTs, and FSL. All other *.m files are MATLAB functions needed to analyze the fMRI data (e.g., conjunction_univariate.m, mvpa_classification.m).

    **fmri/data/audio/** -> this folder stores preprocessed fMRI data for 21 participants (????_sub-???_allruns-cleaned_reml2mni.nii.gz) collected during the auditory experiment. The filename indicates the participant's sensory experience (i.e., blind for congenitally blind individuals; ctrl for typically-developed people) and the id, which matches what we report in Table 2 (e.g., sub-033). In addition, this folder stores the single-participant results of the voxelwise encoding analysis (????_sub-???_allruns-cleaned_encoding_results_categorical_nn1.nii.gz), as well as the results of the non-parametric combination at the group level (????_audio_encoding_npc_fwe_nn1.nii.gz) and the comparison between the categorical and dimensional models in terms of fitting brain activity (????_audio_adjr2_cat-dim.nii.gz).

    **fmri/data/audiovideo/** -> this folder stores preprocessed fMRI data for 10 participants (ctrl_sub-???_allruns-cleaned_reml2mni.nii.gz) collected during the multisensory experiment. The filename indicates the participant's id, which matches what we report in Table 2 (e.g., sub-012). In addition, this folder stores the single-participant results of the voxelwise encoding analysis (ctrl_sub-???_allruns-cleaned_encoding_results_categorical_nn1.nii.gz), as well as the results of the non-parametric combination at the group level (ctrl_audiovideo_encoding_npc_fwe_nn1.nii.gz) and the comparison between the categorical and dimensional models in terms of fitting brain activity (ctrl_audiovideo_adjr2_cat-dim.nii.gz).

    **fmri/data/video/** -> this folder stores preprocessed fMRI data for 19 participants (????_sub-???_allruns-cleaned_reml2mni.nii.gz) collected during the visual experiment. The filename indicates the participant's sensory experience (i.e., deaf for congenitally deaf individuals; ctrl for typically-developed people) and the id, which matches what we report in Table 2 (e.g., sub-020). In addition, this folder stores the single-participant results of the voxelwise encoding analysis (????_sub-???_allruns-cleaned_encoding_results_categorical_nn1.nii.gz), as well as the results of the non-parametric combination at the group level (????_video_encoding_npc_fwe_nn1.nii.gz) and the comparison between the categorical and dimensional models in terms of fitting brain activity (????_video_adjr2_cat-dim.nii.gz).

    **fmri/data/gm_010_final.nii.gz** -> the mask used in the voxelwise encoding analysis.

    **fmri/data/vmpfc_neurosynth_mask_nn1_gm_masked.nii.gz** -> a vmPFC mask obtained from Neurosynth, employed to test the association between the average activity of this region and the emotion model.

    **fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1.nii.gz** -> a map showing the overlap between brain regions encoding the emotion model across groups and conditions.

    **fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20.nii.gz** -> a mask of the emotion network used to classify participants' sensory experience and stimulus modality.

    **fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20_mpfc.nii.gz** -> a mask of mPFC used to classify participants' sensory experience and stimulus modality.

    **fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20_rsts.nii.gz** -> a mask of the right STS used to classify participants' sensory experience and stimulus modality.

    **fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20_lsts.nii.gz** -> a mask of the left STS used to classify participants' sensory experience and stimulus modality.

    **fmri/data/mvpa_classifier_results_feature_relevance.nii.gz** -> a map showing feature relevance for classifying participants' sensory experience and the stimulus modality from the activity of regions encoding the emotion model.

    **fmri/data/avg_across_groups_and_conditions_mpfc_masked_zscore_significant_emotions.nii.gz** -> a map showing the average standardized fitting coefficients of emotions significantly classified from mPFC activity across groups and conditions.

    #### False positive rate code

    **fpr_simulation/** -> this folder stores the code to verify that our NPC method yields a false positive rate in line with what is expected given the alpha level. **fpr_simulation.m** is the main script to run; **generate_synthetic_fmri.m** is a function that generates synthetic fMRI timeseries; **categorical_encoding_matrix_tpshuffle.mat** stores the emotion model used in voxelwise encoding analyses. The workspace contains two variables: *categorical_encoding_matrix*, the actual encoding matrix, and *categorical_encoding_matrix_null*, the null encoding matrix generated by shuffling the timepoints before convolution. **R2_audiovideo_subjects.mat** contains the fitting values of the emotion model across all voxels and participants in the multisensory condition (variable: *timeserie_R2*) and their xyz coordinates in MNI space (variable: *coordinates*).
**npc_fpr_1000_exp_10_part_80_vox_1614_tps_16_pred_2001_perms.mat** is the workspace storing the results of the simulation for 1,000 experiments, 10 participants, 80 voxels, 1,614 timepoints, 16 predictors, and 2000 permutations. relevant variables in the workspace are: -*all_fwc_pvalues* is a 2D matrix (experiments by voxels) storing the familywise corrected pvalues for each voxel and experiment. -*categorical_encoding_matrix* is a 3D matrix (timepoints by predictors by permutations) storing the encoding model and its permuted versions based on timepoint shuffling. -*experiment_data* is a 3D matrix (timepoints by voxels by participants) storing fmri data from one simulated experiment. a new matrix of simulated data is generated for each experiment. -*fpr* is the false positive rate obtained from the simulation after correction for multiple comparisons. -*fpr_ci* is the 95th confidence interval of the false positive rate -*rsquared* is a 3D matrix (participants by voxels by permutations) storing the coefficient of determination obtained by fitting the encoding model to fmri data in one simulated experiment. a new matrix of coefficients is obtained from each experiment. -*significant_experiments* is a logical column array with the number of elements equal to the number of simulated experiments. at each position, the value is either 0 (false) if all voxels do not pass the familywise corrected threshold (*fwc_alpha*) or 1 (true) if at least one voxel reaches statistical significance (false positive experiment). **npc_fpr_1000_exp_10_part_80_vox_1614_tps_16_pred_2001_perms_shuffle_after_convolution.mat** is the workspace storing the results of the same simulation with the only exception that timepoint shuffling is applied after convolution (i.e., wrong method). therefore, the variables' names are identical to those in the **npc_fpr_1000_exp_10_part_80_vox_1614_tps_16_pred_2001_perms.mat** workspace. ## Experiment Info In the main folder, the file **experiment_info.mat** stores information about the fmri and behavioral experiments most scripts require. These details are the tagging categories (i.e., *experiment_info.emotion_categories*), the number of timepoints in the fmri acquisition for each run (i.e., *experiment_info.fmri_runs_duration*), the sampling frequency in the behavioral experiment (i.e., *experiment_info.behavioral_sampling_freq*), and the temporal resolution of the fmri acquisition in seconds (i.e., *experiment_info.fmri_tr_in_sec*). ## Code/Software In the main folder, the **behavioral_parent_script.m** and **fmri_parent_script.m** files contain examples to replicate the analyses reported in our work. For the code to run properly, you need **SPM12** () and **MATLAB Tools for NIfTI and ANALYZE image** () in the MATLAB path. Also, the **chaotic system toolbox** () and the functions *mpcdf.m*, *mpinv*, and *wachter.m* () should be added to the MATLAB path. For exporting figures we use the **export_fig** MATLAB package () and colorbrewer colormaps (). All analyses are implemented in MATLAB R2022a. Description of scripts and functions for the **analysis of behavioral data**: -*behavioral_parent_script.m* ``` running the behavioral_parent_script.m file will result in all behavioral analyses being performed and individual figures being created. Specifically, the script evaluates the most frequent emotions for each condition. 
It also generates emotion-by-emotion representational dissimilarity matrices (RDMs) from categorical annotations collected under the three experimental conditions and affective norms reported in Warriner et al., 2013. Then, it computes Kendall's tau correlation between all pairings of RDMs to assess the similarity of emotions in a latent space across conditions and between behavioral annotations and affective norms. Lastly, it computes principal components (PCs) from categorical ratings and estimates the correlation between PCs and behavioral valence ratings. ``` -*behavioral/code/analyze_emotion_ratings.m* ``` the analyze_emotion_ratings.m function takes as input the parent directory storing categorical annotation of emotions from multiple participants and produces single-participant and group-level timeseries (i.e., the number of participants reporting an emotional instance for each timepoint) of emotion ratings downsampled to fmri resolution. usage: [ratings,downsampled_ratings,aggregated_ratings] = analyze_emotion_ratings('behavioral/data/categorical_audio/',experiment_info.fmri_runs_duration,experiment_info.behavioral_sampling_freq,experiment_info.fmri_tr_in_sec) ------- INPUT: behavioral/data/categorical_audio/: path to behavioral data experiment_info.fmri_runs_duration: duration of fmri runs experiment_info.behavioral_sampling_freq: sampling frequency of behavioral data experiment_info.fmri_tr_in_sec: repetition time of fmri scan ------- OUTPUT: ratings: each participant's emotion annotations not downsampled to the fmri temporal resolution downsampled_ratings: each participant's emotion annotations downsampled to the fmri temporal resolution aggregated_ratings: group-level emotion annotations downsampled to the fmri temporal resolution ``` -*behavioral/code/analyze_valence_ratings.m* ``` the analyze_valence_ratings.m function takes as input the parent directory storing valence scores from multiple participants and produces single-participant and group-level timeseries (i.e., the median valence for each timepoint) of valence downsampled to fmri resolution. usage: [ratings,downsampled_ratings,aggregated_ratings] = analyze_valence_ratings('behavioral/data/valence_audio/',experiment_info.fmri_runs_duration,experiment_info.behavioral_sampling_freq,experiment_info.fmri_tr_in_sec) ------- INPUT: behavioral/data/valence_audio/: path to behavioral data experiment_info.fmri_runs_duration: duration of fmri runs experiment_info.behavioral_sampling_freq: sampling frequency of behavioral data experiment_info.fmri_tr_in_sec: repetition time of fmri scan ------- OUTPUT: ratings: each participant's valence ratings not downsampled to the fmri temporal resolution downsampled_ratings: each participant's valence ratings downsampled to the fmri temporal resolution aggregated_ratings: group-level valence ratings downsampled to the fmri temporal resolution ``` -*behavioral/code/jaccard_agreement.m* ``` this script computes the agreement between participants in emotion annotations and its significance. Agreement is computed using Jaccard coefficients and significance is established through permutation testing. The script also produces Figures reported in the Supplementary materials of our paper. ``` -*behavioral/code/prepare_encoding_regressors_new.m* ``` the prepare_encoding_regressors_new.m function creates the encoding model and its null version for voxelwise encoding analyses. 
This version of the script also computes principal components from categorical ratings of emotion and set the optimal number of PCs using the Wachter method. Please note that the prepare_encoding_regressors.m function is an older version of the function that does not include the Wachter method. [encoding_matrix,encoding_matrix_null,encoding_matrix_pc,encoding_matrix_pc_null,pc_coefficients,explained_variance,n_optimal_components] = prepare_encoding_regressors_new(aggregated_ratings,agreement_threshold,hrf_convolution,hrf_parameters,fmri_tr_in_sec,add_intercept,scaling,n_perm,random_seed,do_pc) ------- INPUT: aggregated_ratings: group-level timeseries of emotion ratings (typically the output of analyze_emotion_ratings.m) agreement_threshold: the minimum number of participants reporting an emotional instance in the same timepoint (e.g., 2) hrf_convolution: if set to 'yes' then emotion ratings are convolved using a hemodynamic response function. hrf_parameters: the parameters for hrf convolution (e.g., [6 16 1 1 6 0 32] the standard in SPM12). fmri_tr_in_sec: the temporal resolution of the fmri acquisition. add_intercept: if set to 'yes' the intercept is added to the encoding model. scaling: if set to 'yes' the encoding matrix is scaled based on overall maximum agreement across subjects. n_perm: number of permutations of the timepoints under the null hypothesis (e.g., 2000). random_seed: the randomization seed for reproducibility (e.g., 15012018). do_pc: if set to 'yes' the function computes also principal components and determines the optimal number of PCs using the wachter method. ------- OUTPUT: encoding_matrix: the encoding matrix based on categorical ratings of emotion. encoding_matrix_null: the null encoding matrix obtained from timepoint shuffling of the original encoding matrix. encoding_matrix_pc: the encoding matrix based on principal components (i.e., dimensional model). encoding_matrix_pc_null: the null encoding matrix based on principal components. pc_coefficients: the coefficients of PCs. explained_variance: the variance explained by each PC. n_optimal_components: the optimal number of components according to the wachter method. ``` Description of scripts and functions for the **analysis of fmri data**: -*fmri_parent_script.m* ``` running the fmri_parent_script.m file will result in all fmri analyses being performed. Specifically, it will perform voxelwise encoding analysis at the single-participant level and will estimate the group-level significance using a non-parametric combination approach. Univariate conjunction analyses are also performed on group-level results obtained from all groups and conditions. In addition, the script also provides results for univariate contrasts between groups and conditions. Running the script, multivoxel pattern classification of participants' sensory experience and stimulus modality is also performed, as well as crossdecoding of valence from regions encoding the emotion model. ``` -*fmri/code/voxelwise_encoding_cluster_corr_new.m* ``` the voxelwise_encoding_cluster_corr_new.m function performs voxelwise encoding at the single-participant level usage: voxelwise_encoding_results = voxelwise_encoding_cluster_corr_new(encoding_matrix,encoding_matrix_null,p_forming_thr,nn_type,fwe_threshold,fmri_parent_directory,input_files,output_suffix,fmri_mask_file,demean_fmri,n_cpus,save_to_disk) ------- INPUT: encoding_matrix: the encoding matrix, typically the output of the prepare_encoding_regressors_new.m function. 
encoding_matrix_null: permuted versions of the encoding matrix also coming from prepare_encoding_regressors_new.m function. p_forming_thr: the cluster defining threshold (e.g., 0.001). nn_type: the type of connection to determine a cluster. nn_type = 1 means connected if faces touch; nn_type = 2 means connected if faces or edges touch; nn_type = 3 means connected if faces, edges or corners touch. fwe_threshold: the familywise corrected threshold (e.g., 0.05). fmri_parent_directory: the parent directory storing preprocessed single-participant fmri data. input_files: the name of nifti files (e.g., 'ctrl_sub-*_allruns-cleaned_reml2mni.nii'). output_suffix: the suffix of the output filename (e.g., '_encoding_results_categorical_nn1'). fmri_mask_file: a mask to limit the search for significance (e.g., 'fmri/data/gm_010_final.nii'). demean_fmri: if 'yes' mean center the fmri signal. n_cpus: number of cpus for parallel computing. save_to_disk: if 'yes' voxelwise encoding results are saved to disk. ------- OUTPUT: voxelwise_encoding_results: a matrix storing voxelwise encoding results for each voxel. ``` -*fmri/code/voxelwise_encoding_group_analysis_npc.m* ``` the voxelwise_encoding_group_analysis_npc.m function computes group results for categorical ratings using non-parametric combination. usage: voxelwise_group_results = voxelwise_encoding_group_analysis_npc(parent_dir,single_sub_matfiles,fmri_mask_file,output_filename,fwe_threshold,cluster_forming_thr,nn_type) ------- INPUT: parent_dir: the folder with MATLAB workspaces storing the results of the voxelwise encoding analysis for each participant. Typically the results of the voxelwise_encoding_cluster_corr_new.m function (e.g., 'fmri/data/audio/'). single_sub_matfiles: the prefix of single-participant MATLAB workspaces (e.g., 'ctrl_*_encoding_results_categorical_nn1.mat'). fmri_mask_file: a mask to limit the search for significance (e.g., 'fmri/data/gm_010_final.nii'). output_filename: the suffix of the output filename (e.g., 'ctrl_audio_encoding_npc_fwe_nn1.nii'). fwe_threshold: the familywise corrected threshold (e.g., 0.05). cluster_forming_thr: the cluster defining threshold (e.g., 0.001). nn_type: the type of connection to determine a cluster. ------- OUTPUT: voxelwise_group_results: a matrix storing the family-wise corrected pvalue for each voxel. ``` -*fmri/code/conjunction_univariate.m* ``` the conjunction_univariate.m function performs conjunctions between group-level results obtained from non-parametric combination. please refer to equations e-h in the manuscript for further details. usage: conjunction_results = conjunction_univariate(path_to_univariate_results,save_to_disk) ------- INPUT: path_to_univariate_results: specify the path pointing to all group-level voxelwise encoding results (e.g., 'fmri/data/*/*_encoding_npc_fwe_nn1.nii'). save_to_disk: if 'yes' conjunction results are saved to disk. ------- OUTPUT: conjunction_results: a structure storing results of univariate conjunction analyses. ``` -*fmri/code/voxelwise_group_comparison.m* ``` the voxelwise_group_comparison.m function compares the fitting obtained for the full emotion model between conditions and/or groups. usage: results = voxelwise_group_comparison(condition_a,condition_b,group_a,group_b,fmri_mask_file,nn_type,n_perm,cluster_forming_thr,number_of_cpus,save_to_disk) ------- INPUT: condition_a: the experimental condition of the first sample (e.g., audio). condition_b: the experimental condition of the second sample (e.g., audio). 
group_a: the sensory experience of the first sample (e.g., ctrl). group_b: the sensory experience of the second sample (e.g., blind). fmri_mask_file: a mask to limit the search for significance (e.g., 'fmri/data/gm_010_final.nii'). nn_type: the type of connection to determine a cluster. n_perm: number of permutations for establishing statistical significance (e.g., 2000). cluster_forming_thr: the cluster defining threshold (e.g., 0.001). number_of_cpus: number of cpus for parallel computing. save_to_disk: if 'yes' results are saved to disk. ------- OUTPUT: results: a structure storing results of univariate contrasts. ``` -*fmri/code/mvpa_classification.m* ``` the mvpa_classification.m function is used to classify participants sensory experience and the modality through which the emotion elicitation paradigm was administered from regions encoding the emotion model. we use a svm classifier and f1score as performance metric. usage: mvpa_classifier_output = mvpa_classification(parent_dir, nn_type, roi_file, n_folds, n_features, n_perms, performance_metric, save_to_disk, output_filename) ------- INPUT: parent_dir: the directory storing the single-participant workspaces obtained from the voxelwise_encoding_cluster_corr_new.m (e.g., 'fmri/data'). nn_type: the connection type used to define a cluster in single-participant analyses (e.g., 'nn1'). roi_file: a mask to determine the voxels used to perform the classification (e.g., 'fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20.nii'). n_folds: number of folds for the cross-validation procedure (e.g., 5). n_features: number of features (i.e., voxels) that are used in the classification analysis (e.g., 1000). n_perms: number of permutations for establishing the statistical significance of the classification. (e.g., 2000). performance_metric: the metric used to evaluate classifier performance (e.g., 'f1score'). save_to_disk: if 'yes' results are saved to disk. output_filename: a filename for the results (e.g., 'fmri/data/mvpa_classifier_results'). ------- OUTPUT: mvpa_classifier_output: a structure storing results of multivariate classification. ``` -*fmri/code/crossdecoding_ridge.m* ``` the crossdecoding_ridge.m function is employed to crossdecode valence from regions significantly associated to the emotion model. this to test whether some brain area map valence in a supramodal manner. one can explore different masks to assess the spatial specificity of results (e.g., entire network encoding the emotion model, mpfc roi). usage: crossdecoding_results = crossdecoding_ridge(fmri_data_dir, file_prefix, mask_file, random_seed, valence_data_dir, experiment_info, ridge_values, n_perm, save_output) ------- INPUT: fmri_data_dir: the directory storing preprocessed single-participant fmri data (e.g., 'fmri/data'). file_prefix: the prefix of single-participant nifti files (e.g., '_reml2mni.nii'). mask_file: a mask to determine the voxels used to crossdecode valence (e.g., 'fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20.nii'). random_seed: the randomization seed for reproducibility (e.g., 14051983). valence_data_dir: the parent directory storing behavioral ratings of valence (e.g., 'behavioral/data'). experiment_info: the variable containing experiment details (this is the structure stored in the experiment_info.mat workspace). ridge_values: the penalization values to be tested in the crossvalidation procedure (e.g., logspace(-3,2,1000)). n_perm: number of permutations for establishing the statistical significance (e.g., 2000). 
save_output: if 'yes', results are saved to disk.
-------
OUTPUT:
crossdecoding_results: a structure storing the results of the crossdecoding of valence.
```

-*fmri/code/emotion_decoding_and_coefficients_similarity_in_mpfc.m*

```
the emotion_decoding_and_coefficients_similarity_in_mpfc.m script performs the decoding of emotional instances from mpfc regression coefficients.
```

-*fmri/code/multiclass_classifier_performance.m*

```
the multiclass_classifier_performance.m function estimates performance metrics in the context of multiclass classification.

usage:
[macro_metrics, weighted_metrics, micro_metrics, single_class_metrics, classifier_errors] = multiclass_classifier_performance(confusion_matrix)
-------
INPUT:
confusion_matrix: a confusion matrix resulting from a classification procedure.
-------
OUTPUT:
macro_metrics: stores - in the following order - the macro accuracy, macro precision, macro recall and macro f1 score for all the evaluated confusion matrices.
weighted_metrics: stores - in the following order - the weighted average accuracy, precision, recall and f1 score for all the evaluated confusion matrices. the averages are weighted by the number of elements in each class.
micro_metrics: in multiclass classification, micro precision, micro recall and micro f1 score are the same number; this is what micro_metrics stores for each evaluated confusion matrix.
single_class_metrics: stores - in the following order - the accuracy, precision, recall, and f1 score of each class and for all the evaluated confusion matrices.
classifier_errors: stores - in the following order - the number of true positives, true negatives, false positives and false negatives of each class and for all the evaluated confusion matrices.
```

-*fmri/code/neurosynth_vmpfc.m*

```
the neurosynth_vmpfc.m script estimates the relationship between the average activity of vmpfc and the emotion model.
```

Emotion and perception are tightly intertwined, as affective experiences often arise from the appraisal of sensory information. Nonetheless, whether the brain encodes emotional instances using a sensory-specific code or in a more abstract manner is unclear. Here, we answer this question by measuring the association between emotion ratings collected during a unisensory or multisensory presentation of a full-length movie and brain activity recorded in typically-developed, congenitally blind and congenitally deaf participants. Emotional instances are encoded in a vast network encompassing sensory, prefrontal, and temporal cortices. Within this network, the ventromedial prefrontal cortex stores a categorical representation of emotion independent of modality and previous sensory experience, and the posterior superior temporal cortex maps the valence dimension using an abstract code. Sensory experience more than modality impacts how the brain organizes emotional information outside supramodal regions, suggesting the existence of a scaffold for the representation of emotional states whose functioning is shaped by sensory inputs during development.
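Taken together, the documented functions form a single analysis chain. As an orientation only, here is a minimal MATLAB sketch of how one might call them in sequence, reusing the example argument values from the documentation above; it is not part of the released code, and the number of cpus is a placeholder:

```
% sketch: group-level NPC over single-participant encoding workspaces,
% then conjunctions and a between-group contrast
fwe = 0.05; cdt = 0.001; nn = 1;                 % thresholds from the docs above
mask = 'fmri/data/gm_010_final.nii';

grp = voxelwise_encoding_group_analysis_npc('fmri/data/audio/', ...
    'ctrl_*_encoding_results_categorical_nn1.mat', mask, ...
    'ctrl_audio_encoding_npc_fwe_nn1.nii', fwe, cdt, nn);

conj = conjunction_univariate('fmri/data/*/*_encoding_npc_fwe_nn1.nii', 'yes');

res = voxelwise_group_comparison('audio', 'audio', 'ctrl', 'blind', ...
    mask, nn, 2000, cdt, 8, 'yes');              % 8 cpus is a placeholder
```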

    DRYAD; ZENODO
    Dataset . 2024
    License: CC 0
    Data sources: Datacite; ZENODO

    This Research product is the result of merged Research products in OpenAIRE.
  • Authors: Rumbeiha, Wilson;

# Metabolomic profiles of acute and chronic ambient hydrogen sulfide exposure in a mouse model

# General information

This README file was generated on 2024-01-04 by Dongsuk Kim.

## Author Information
A. Principal Investigator Contact Information
Name: Wilson Rumbeiha
Institution: University of California at Davis
Email:
B. Associate or Co-investigator Contact Information
Name: Dongsuk Kim
Institution: University of California at Davis
Email:

# Dataset contents
This dataset contains three normalized metabolomics data tables from brainstem following acute and subchronic hydrogen sulfide exposures.

# SHARING/ACCESS INFORMATION
1. Licenses/restrictions placed on the data: CC0 1.0 Universal (CC0 1.0) Public Domain
2. Links to publications that cite or use the data: Dong-Suk Kim, Cristina MS Maldonado, Cecilia Giulivi, and Wilson K Rumbeiha (2024). Metabolomic signatures of brainstem in mice following acute and chronic hydrogen sulfide exposure.
3. Links to other publicly accessible locations of the data: None
4. Links/relationships to ancillary data sets: None
5. Was data derived from another source? No
6. Recommended citation for this dataset: Dong-Suk Kim, Cristina MS Maldonado, Cecilia Giulivi, and Wilson K Rumbeiha (2024). Data from: Metabolomic signatures of brainstem in mice following acute and chronic hydrogen sulfide exposure. Dryad Digital Repository.

# Experimental procedures

## Exposure paradigm
Hydrogen sulfide (H2S) is an environmental toxicant of health concern following acute or chronic human exposures. Male 6-8 week-old C57BL/6J mice were exposed by whole-body inhalation to 1000 ppm H2S for 45 min and euthanized at 5 min and 72 h post exposure for the acute study. For the subchronic study, mice were exposed to 5 ppm H2S 2 h/day, 5 days/week for 5 weeks. The brainstem was removed for metabolomic analysis.

## Metabolomics analysis
The metabolomics analyses consisted of three assays: (1) primary metabolism by GC-TOF MS, (2) biogenic amines (hydrophilic compounds) by HILIC-MS/MS, and (3) lipidomics by RPLC-MS/MS. Metabolomics were performed at the West Coast Metabolomics Center (WCMC), University of California at Davis, CA, USA. Raw data were normalized by the standard normalization in WCMC.

# Results
348, 311, and 565 known metabolites were detected and analyzed by the primary metabolism, biogenic amines, and lipidomics assays, respectively. Compared to the room air control group, 33, 19, and 46 metabolites were increased at 5 min post acute exposure, at 72 h post acute exposure, and after subchronic ambient H2S exposure, respectively; 22, 17, and 32 metabolites were decreased at the same three timepoints, respectively. Acute H2S exposure decreased concentrations of the excitatory neurotransmitters aspartate and glutamate, while the inhibitory neurotransmitter serotonin was increased. Glutamate and serotonin were also decreased after ambient H2S exposure. Branched-chain amino acids, fructose, and glucose were increased by acute H2S exposure. After ambient H2S exposure, glucose was decreased while MUFAs, PUFAs, inosine, and hypoxanthine were increased. Collectively, these results provide important mechanistic clues about acute and subchronic ambient H2S poisoning and show that H2S alters neurotransmission homeostasis.

# Description of the data and file structure
1. PA data.txt contains normalized metabolomics data from the biogenic amines assay.
2. PM data.txt contains normalized metabolomics data from the primary metabolism assay.
3. Lipid data.txt contains normalized metabolomics data from the lipidomics assay.

## DATA-SPECIFIC INFORMATION FOR: PA data.txt
1. Number of headers: 36
2. Number of data rows (identified metabolites): 309
3. Header list:
identifier: index number of identified metabolite
Metabolite name: common name of identified metabolite
Species: chemical ionization mode
InChiKey: International Chemical Identifier
MSI level: Metabolomics Standards Initiative level
m/z: mass-to-charge ratio
RT: retention time (minutes)
A - 1 through A - 5: peak heights, room air control group for the 1000 ppm H2S exposure (replicates 1-5)
B - 1 through B - 4: peak heights, 5 min post 1000 ppm H2S exposure (replicates 1-4)
C - 1 through C - 5: peak heights, 72 h post 1000 ppm H2S exposure (replicates 1-5)
D - 1 through D - 4: peak heights, room air control group for the 5 ppm H2S exposure (replicates 1-4)
E - 1 through E - 5: peak heights, 5 ppm H2S exposure (replicates 1-5)
MtdBlank001-003: peak heights of blanks 1-3
PoolQC001-003: peak heights of pooled quality controls 1-3
4. Abbreviation: na: non-identified InChiKey

## DATA-SPECIFIC INFORMATION FOR: PM data.txt
1. Number of headers: 32
2. Number of data rows (identified metabolites): 126
3. Header list:
Metabolite name: name of identified metabolite
ret.index: retention time (minutes)
quant mz: mass-to-charge ratio
mass spec: mass spectra
PubChem: PubChem ID for the specific metabolite
KEGG: Kyoto Encyclopedia of Genes and Genomes ID
InChI Key: International Chemical Identifier
A - 1 through E - 5: same exposure-group coding as in PA data.txt (23 columns)
pool_001-002: peak heights of pooled quality controls 1-2
4. Abbreviation: na: non-identified PubChem or KEGG ID

## DATA-SPECIFIC INFORMATION FOR: Lipid data.txt
1. Number of headers: 38
2. Number of data rows (identified metabolites): 565
3. Header list:
identifier: ID of identified metabolite
name: name of identified metabolite
ion species: ion species
InChiKey: International Chemical Identifier
m/z: mass-to-charge ratio
ESI mode: ESI mode
ret.time: retention time (minutes)
A - 1 through E - 5: same exposure-group coding as in PA data.txt (23 columns)
PoolQC001-004: peak heights of pooled quality controls 1-4
MtdBlank001-004: peak heights of blanks 1-4
4. Missing data code: na
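For readers who want to work with these tables directly, here is a minimal MATLAB sketch; it assumes the files are tab-delimited text with the headers listed above (the delimiter is an assumption, not stated in this README):

```
% sketch: load the biogenic amines table and compare acute exposure (B)
% against its room-air control (A); assumes tab-delimited text
t = readtable('PA data.txt', 'Delimiter', '\t', 'VariableNamingRule', 'preserve');
ctrl  = t{:, ["A - 1","A - 2","A - 3","A - 4","A - 5"]};  % control peak heights
acute = t{:, ["B - 1","B - 2","B - 3","B - 4"]};          % 5 min post 1000 ppm H2S
fold_change = mean(acute, 2) ./ mean(ctrl, 2);            % per-metabolite ratio
```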

    DRYAD; ZENODO
    Dataset . 2024
    License: CC 0
    Data sources: Datacite; ZENODO

    This Research product is the result of merged Research products in OpenAIRE.
  • Authors: Shan, Zack;

    We provide a dataset of pre-processed anatomical and functional brain MR images from 10 healthy controls. The dataset can be used to replicate the results of the manuscript titled 'Functional MRI of the brain stem for assessing its autonomic functions: from imaging parameters and analysis to functional atlas.' That manuscript presented an optimised functional brainstem imaging protocol (FIBS). Skulls were removed from the shared MRI images, and brain images were normalised to the Montreal Neurological Institute (MNI) space to protect participants' privacy. Details of the pre-processing are provided in the paper mentioned above. The atlas includes 12 regions of interest (ROIs) in the brain stem involved in autonomic control. This dataset could potentially be used to: 1. compare temporal signal-to-noise ratios among different imaging protocols; 2. provide the brain stem anatomic locations involved in autonomic control; 3. add to the normal control database for brain stem studies.
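As an illustration of the first suggested use, per-voxel temporal signal-to-noise ratio can be computed directly from a 4D functional image. A minimal MATLAB sketch (the filename is hypothetical):

```
% sketch: per-voxel temporal SNR = mean over time / std over time
Y = double(niftiread('sub-01_func_mni.nii'));  % hypothetical 4D file, X x Y x Z x T
tsnr = mean(Y, 4) ./ std(Y, 0, 4);             % tSNR map
tsnr(~isfinite(tsnr)) = 0;                     % clean up zero-variance voxels
```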

    ZENODO
    Dataset . 2024
    License: CC BY
    Data sources: ZENODO

    This Research product is the result of merged Research products in OpenAIRE.
  • Authors: Takashima, A. (Atsuko); Francesca Carota; Schoots, V.C.; Redmann, A.; +2 Authors

    When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named or not. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented in black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri (“Human V4”) correlated with a representational model encoding the red–green distinction weighted by the importance of the color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself has to be named.
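To make the analysis logic concrete: build a model representational dissimilarity matrix (RDM) in which objects from different color categories are dissimilar, scale it by each object's color-diagnosticity weight, and correlate it with a neural RDM from an ROI. The sketch below is one plausible MATLAB construction under these assumptions, not necessarily the authors' exact weighting scheme; all values are placeholders:

```
% sketch: weighted red-green model RDM vs. a neural RDM
colors = [1 1 1 2 2 2];                 % 1 = red, 2 = green, one entry per object
w = [0.9 0.7 0.8 0.95 0.6 0.85];        % hypothetical color-feature importance
n = numel(colors);
model = zeros(n);
for i = 1:n
    for j = 1:n
        % dissimilar only across categories, scaled by feature importance
        model(i,j) = (colors(i) ~= colors(j)) * sqrt(w(i) * w(j));
    end
end
patterns = randn(n, 200);                          % placeholder ROI patterns (objects x voxels)
neural = squareform(pdist(patterns, 'correlation'));  % neural RDM
m = squareform(model); v = squareform(neural);     % vectorized upper triangles
rho = corr(m(:), v(:), 'type', 'Spearman');        % model-brain similarity
```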

    https://doi.org/10.34973/agnh-...
    Dataset . 2024
    License: CC 0
    Data sources: Datacite

    This Research product is the result of merged Research products in OpenAIRE.
  • This dataset comprises a curated collection of articles. Each article is categorized according to four key criteria: 1) the specific type of article, 2) the year of publication, 3) the variety of neuroscience tools employed (applicable only to empirical studies), and 4) its relevance and application in the field of marketing. For specifics on the data see: Alvino, L., Pavone, L., Abhishta, A., & Robben, H. (2020). Picking Your Brains: Where and How Neuroscience Tools Can Enhance Marketing Research. Frontiers in Neuroscience, 14, Article 577666. https://doi.org/10.3389/fnins.2020.577666

    4TU.ResearchData
    Dataset . 2023
    License: CC BY NC
    Data sources: 4TU.ResearchData
    4TU.ResearchData
    Dataset . 2024
    License: CC BY NC
    Data sources: 4TU.ResearchData
    4TU.ResearchData | science.engineering.design
    Dataset . 2024
    License: CC BY NC
    Data sources: Datacite
    4TU.ResearchData | science.engineering.design
    Dataset . 2023
    License: CC BY NC
    Data sources: Datacite

    This Research product is the result of merged Research products in OpenAIRE.
  • Authors: Hallquist, Michael; Hwang, Kai; Luna, Beatriz; Dombrovski, Alexandre;

    fMRI acquisition

    Neuroimaging data during the clock task were acquired in a Siemens Tim Trio 3T scanner for the original study and a Siemens Tim Prisma 3T scanner for the replication study at the Magnetic Resonance Research Center, University of Pittsburgh. Due to participant-dependent variation in response times on the task, each fMRI run varied in length from 3.15 to 5.87 minutes (M = 4.57 minutes, SD = 0.52). Functional imaging data for the original/replication study were acquired using a simultaneous multislice sequence sensitive to BOLD contrast, TR = 1.0/0.6s, TE = 30/27ms, flip angle = 55/45°, multiband acceleration factor = 5/5, voxel size = 2.3/3.1mm3. We also obtained a sagittal MPRAGE T1-weighted scan, voxel size = 1/1mm3, TR = 2.2/2.3s, TE = 3.58/3.35ms, GRAPPA 2/2x acceleration. The anatomical scan was used for coregistration and nonlinear transformation to functional and stereotaxic templates. We also acquired gradient echo fieldmap images (TEs = 4.93/4.47ms and 7.39/6.93ms) for each subject to mitigate inhomogeneity-related distortions in the functional MRI data.

    Preprocessing of fMRI data

    Anatomical scans were registered to the MNI152 template (82) using both affine (ANTS SyN) and nonlinear (FSL FNIRT) transformations. Functional images were preprocessed using tools from NiPy (83), AFNI (version 19.0.26) (84), and the FMRIB software library (FSL version 6.0.1) (85). First, slice timing and motion coregistration were performed simultaneously using a four-dimensional registration algorithm implemented in NiPy (86). Non-brain voxels were removed from functional images by masking voxels with low intensity and by the ROBEX brain extraction algorithm (87). We reduced distortion due to susceptibility artifacts using fieldmap correction implemented in FSL FUGUE. Participants' functional images were aligned to their anatomical scan using the white matter segmentation of each image and a boundary-based registration algorithm (88), augmented by fieldmap unwarping coefficients. Given the low contrast between gray and white matter in echoplanar scans with fast repetition times, we first aligned functional scans to a single-band fMRI reference image with better contrast. The reference image was acquired using the same scanning parameters, but without multiband acceleration. Functional scans were then warped into MNI152 template space (2.3mm output resolution) in one step using the concatenation of functional-reference, fieldmap unwarping, reference-structural, and structural-MNI152 transforms. Images were spatially smoothed with a 5mm full-width at half maximum (FWHM) kernel using the nonlinear smoother implemented in FSL SUSAN. To reduce head motion artifacts, we then conducted an independent component analysis for each run using FSL MELODIC. The spatiotemporal components were then passed to a classification algorithm, ICA-AROMA, validated to identify and remove motion-related artifacts (89). Components identified as noise were regressed out of the data using FSL regfilt (non-aggressive regression approach). ICA-AROMA has performed very well in head-to-head comparisons of alternative strategies for reducing head motion artifacts (90). We then applied a 0.008 Hz temporal high-pass filter to remove slow-frequency signal changes (91); the same filter was applied to all regressors in GLM analyses. Finally, we renormalized each voxel time series to have a mean of 100 to provide similar scaling of voxelwise regression coefficients across runs and participants.
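The final rescaling step can be written compactly. A minimal MATLAB sketch (not the authors' code; the filename is hypothetical):

```
% sketch: rescale each voxel's time series to a mean of 100
Y = double(niftiread('func_preproc.nii'));  % hypothetical 4D file, X x Y x Z x T
m = mean(Y, 4);                             % per-voxel temporal mean
m(m == 0) = 1;                              % avoid division by zero outside the brain
Y = 100 * Y ./ m;                           % voxelwise mean is now 100
```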
    Treatment of head motion

    In addition to mitigating head motion-related artifacts using ICA-AROMA, we excluded runs in which more than 10% of volumes had a framewise displacement (FD) of 0.9mm or greater, as well as runs in which head movement exceeded 5mm at any point in the acquisition. This led to the exclusion of 11 runs in total, yielding 549 usable runs across participants. Furthermore, in voxelwise GLMs, we included the mean time series from deep cerebral white matter and the ventricles, as well as the first derivatives of these signals, as confound regressors (90).

    MEG data acquisition

    MEG data were acquired using an Elekta Neuromag VectorView MEG system (Elekta Oy, Helsinki, Finland) in a three-layer magnetically shielded room. The system comprised 306 sensors: 204 planar gradiometers and 102 magnetometers. In this project we only included data from the gradiometers, as data from the magnetometers added noise and had a different amplitude scale. MEG data were recorded continuously at a sampling rate of 1000 Hz. We measured head position relative to the MEG sensors throughout the recording period using four continuous head position indicators (cHPI) that continuously emit sinusoidal signals, and head movements were corrected offline during preprocessing. To monitor saccades and eye blinks, we used two bipolar electrode pairs to record the vertical and horizontal electrooculogram (EOG).

    Preprocessing of MEG data

    Flat or noisy channels were identified by manual inspection, and all data were preprocessed using the temporal signal space separation (TSSS) method (92, 93). TSSS suppresses environmental artifacts from outside the MEG helmet and performs head movement correction by aligning sensor-level data to a common reference (94). This realignment allowed sensor-level data to be pooled across subjects for group analyses of sensor-space data. Cardiac and ocular artifacts were then removed using an independent component analysis, decomposing MEG sensor data into independent components (ICs) with the infomax algorithm (95). Each IC was then correlated with the ECG and EOG recordings, and an IC was designated as an artifact if the absolute value of the correlation was at least three standard deviations higher than the mean of all correlations. The non-artifact ICs were projected back to the sensor space to reconstruct the signals for analysis. After preprocessing, data were epoched to the onset of feedback, with a window from -0.7 to 1.0 seconds. Trials with gradiometer peak-to-peak amplitudes exceeding 3000 fT/cm were excluded. Please note that the following processing step has NOT been applied to the MEG data: "For each sensor, we computed the time-frequency decomposition of activity on each trial by convolving time-domain signals with Morlet wavelet, stepping from 2 to 40 Hz in logarithmic scale using 6 wavelet cycles".
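The IC rejection criterion above is a simple outlier rule. A minimal MATLAB sketch (not the authors' code), assuming r holds the absolute correlations between each IC and an ECG/EOG channel:

```
function bad = artifact_ics(r)
% sketch of the IC rejection rule described above
% r - vector of absolute IC-to-ECG/EOG correlations, one per component
bad = r >= mean(r) + 3 * std(r);   % flag ICs at least 3 SDs above the mean
end
```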
    # Reward-based option competition in human dorsal stream and transition from stochastic exploration to exploitation in continuous space

    Behavioral, fMRI and MEG data.

    ## Description of the data and file structure

    * Directories and files within hallquist_etal_supplemental_data.zip:

    fig_1: behavioral data from the fMRI study
    trial_data_compact.RData - RData file with the following variables:
    $ dataset: study name
    $ id: participant's numeric id
    $ run: sequential number of the current 50-trial block, 1-8
    $ trial: trial
    $ rewFunc: contingency, "DEV", "CEV", "CEVR", "IEV"
    $ rt_csv: response time in seconds
    $ magnitude: expected reward magnitude
    $ probability: expected reward probability
    $ ev: expected reward value
    $ rt_vmax: response time with the highest learned value, as predicted by the SCEPTIC model
    $ score_csv: reward received

    fig_2: DAN parcellation, whole-brain statistical parametric maps (BOLD signal)
    entropy_change_wb_unthresholded_1mm.nii.gz: .nii file of the un-thresholded parametric entropy change map
    entropy_wb_unthresholded_1mm.nii.gz: .nii file of the un-thresholded parametric entropy map
    Schaefer_444_final_2009c_1.0mm.nii.gz: .nii file of the Schaefer et al. 400-region parcellation in MNI 2009c space
    Schaefer2018_DAN_2009c_FINAL47.nii.gz: same, but only dorsal attention stream regions

    fig_3: deconvolved DAN BOLD signal, same parcellation as in fig_2
    rt_aligned_deconvolved_bold.RData: RData file with the following variables:
    $ id: participant's numeric id
    $ run: sequential number of the current 50-trial block, 1-8
    $ run_trial: trial within run (1:50); note the difference from the "trial" variable in the behavioral data file
    $ feedback_onset: onset of feedback, in seconds
    $ rewFunc: contingency, "DEV", "CEV", "CEVR", "IEV"
    $ atlas_value: number of dorsal stream node as in Table S2
    $ label: label of dorsal stream node as in Table S2 and Figure S2
    $ decon_interp: deconvolved BOLD signal
    $ side: right ("R") or left ("L")

    fig_4: BOLD regional regression coefficients corresponding to entropy change maps in fig_2
    entropy_change_betas.csv.gz: text file with the following variables:
    $ id: participant's numeric id
    $ atlas_value: number of dorsal stream node as in Table S2
    $ x, y, z: MNI coordinates
    $ value: mean regional regression coefficient for entropy change

    fig_5: MEG time-frequency domain statistics for entropy change
    meg_time_frequency_entropy_change_ri.rds: .rds (R Data Serialization) file with the following variables:
    $ Time: time in seconds relative to feedback
    $ Freq: frequency, Hz
    $ estimate: regression coefficient, estimate
    $ std.error: regression coefficient, standard error
    $ statistic: test statistic
    $ df: degrees of freedom
    $ p.value: uncorrected p-value
    $ p_fdr: FDR-corrected p-value

    ## Code/Software

    SCEPTIC computational model: [10.5281/zenodo.1336285](https://zenodo.org/doi/10.5281/zenodo.1336285)

    Primates exploring and exploiting a continuous sensorimotor space rely on dynamic maps in the dorsal stream. Two complementary perspectives exist on how these maps encode rewards. Reinforcement learning models integrate rewards incrementally over time, efficiently resolving the exploration/exploitation dilemma. Working memory buffer models explain rapid plasticity of parietal maps but lack a plausible exploration/exploitation policy. The reinforcement learning model presented here unifies both accounts, enabling rapid, information-compressing map updates and an efficient transition from exploration to exploitation. As predicted by our model, activity in human fronto-parietal dorsal stream regions, but not in MT+, tracks the number of competing options, as preferred options are selectively maintained on the map while spatiotemporally distant alternatives are compressed out. When valuable new options are uncovered, posterior beta1/alpha oscillations desynchronize within 0.4-0.7 s, consistent with option encoding by competing beta1-stabilized subpopulations. Altogether, outcomes matching locally cached reward representations rapidly update parietal maps, biasing choices toward often-sampled, rewarded options.
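To get started with the tabular coefficients in fig_4, a minimal MATLAB sketch (filenames as listed above; the grouping call is illustrative, not part of the released code):

```
% sketch: decompress and load the regional entropy change coefficients
files = gunzip('entropy_change_betas.csv.gz');   % writes entropy_change_betas.csv
betas = readtable(files{1});
% average the entropy change coefficient across participants per node
node_means = groupsummary(betas, 'atlas_value', 'mean', 'value');
```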

    DRYAD; ZENODO
    Dataset . 2024
    License: CC 0
    Data sources: Datacite; ZENODO

    This Research product is the result of merged Research products in OpenAIRE.
  • Authors: Hayder, Amin;

    Sequencing data (fastq.gz) were processed using the 10X Genomics Space Ranger Count pipeline.
    L75027 = Mouse_Hippocampus_SD
    L75031 = Mouse_Hippocampus_ENR
    Outputs include a spatial folder containing outputs that capture the spatiality of the data and a filtered_feature_bc_matrix folder containing the tissue-associated barcodes in MEX format.
    spatial/
    aligned_fiducials.jpg: aligned fiducials QC image
    detected_tissue_image.jpg: detected tissue QC image
    scalefactors_json.json: scale conversion factors for spot diameter and coordinates at various image resolutions
    tissue_hires_image.png: downsampled full-resolution image
    tissue_lowres_image.png: full-resolution image downsampled to 600 pixels on the longest dimension
    tissue_positions_list.csv: CSV containing spot barcodes
    filtered_feature_bc_matrix.h5: contains only tissue-associated barcodes, in HDF5 format
    filtered_feature_bc_matrix/
    barcodes.tsv.gz: list of barcodes
    features.tsv.gz: list of features
    matrix.mtx.gz: count matrix
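A minimal MATLAB sketch (not part of the dataset) for loading the filtered feature-barcode matrix, assuming the standard 10x Genomics HDF5 layout with CSC arrays under /matrix (this layout is an assumption about the file, not documented here):

```
% sketch: rebuild the sparse genes x spots count matrix from the HDF5 file
f = 'filtered_feature_bc_matrix.h5';
data    = double(h5read(f, '/matrix/data'));     % nonzero UMI counts
indices = double(h5read(f, '/matrix/indices'));  % 0-based row (gene) indices
indptr  = double(h5read(f, '/matrix/indptr'));   % CSC column pointers
shape   = double(h5read(f, '/matrix/shape'));    % [n_genes n_spots]
cols = zeros(numel(data), 1);                    % expand indptr to a column index per nonzero
for j = 1:numel(indptr) - 1
    cols(indptr(j)+1 : indptr(j+1)) = j;
end
M = sparse(indices + 1, cols, data, shape(1), shape(2));
```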

    ZENODO
    Dataset . 2024
    License: CC BY
    Data sources: ZENODO
    ZENODO
    Dataset . 2024
    License: CC BY
    Data sources: Datacite

    This Research product is the result of merged Research products in OpenAIRE.
  • Authors: Hayder, Amin;

    Raw data (.brw) were recorded with BrainWave software, and detected LFP events are stored in .bxr files for the SD and ENR groups. These extracellular recordings were obtained from acute hippocampal-cortical slices at a sampling frequency of 14 kHz per electrode.
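    Since BrainWave's .brw and .bxr containers are HDF5-based, a generic HDF5 reader can inspect them. A minimal sketch follows; the filename is hypothetical and the internal group layout depends on the BrainWave version, so treat this as a starting point rather than a spec:

    ```python
    # A minimal sketch for inspecting one of the recordings with h5py.
    # The filename is a hypothetical placeholder; the exact HDF5 group
    # layout should be checked against the downloaded files.
    import h5py

    with h5py.File("slice01_SD.brw", "r") as f:
        # walk the HDF5 tree and print every group/dataset with its shape
        def show(name, obj):
            shape = getattr(obj, "shape", "")
            print(name, shape)
        f.visititems(show)
    ```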

    ZENODO · Dataset · 2024 · License: CC BY · Data sources: ZENODO; Datacite
  • Authors: Bastiaansen, Wietske; Rousian, Melek; Koning, Anton; Niessen, Wiro; +3 Authors

    Early detection of (patho)physiological variation during prenatal neurodevelopment is challenging due to limited knowledge of the normal physiological development of the embryonic and early fetal brain. To provide a detailed picture of this development, we created the 4D Human Embryonic Brain Atlas: a spatiotemporal atlas based on three-dimensional (3D) ultrasound images acquired between 8 and 12 weeks of gestational age. The atlas was built using a deep learning approach for groupwise image registration, which accounts for the rapid morphological changes during the first trimester, and was created and validated using 831 3D ultrasound volumes from 402 subjects of the Rotterdam Periconceptional Cohort. It provides unique insight into this crucial early-life period and has the potential to enhance the detection, prevention, and treatment of prenatal neurodevelopmental disorders.

    ZENODO · Dataset · 2024 · License: CC BY-NC · Data sources: ZENODO; Datacite
  • Authors: Lettieri, Giada; Handjaras, Giacomo; Cappello, Elisa Morgana; Setti, Francesca; +9 Authors

    # Dissecting abstract, modality-specific, and experience-dependent coding of affect in the human brain

    ## Description of the data and file structure

    #### Behavioral data and code

    **behavioral/code/** -> all MATLAB functions needed to analyze the behavioral data (i.e., categorical and valence ratings collected during the auditory, visual, and multisensory experiments).

    **behavioral/data/categorical_audio/** -> 20 folders (one for each participant) storing the categorical annotations of emotion collected during the auditory experiment. The folder also includes subs_demographics.csv, which stores participants' demographics. In each participant's folder, there are 6 MATLAB workspace files (*_run0?.mat) storing ratings of the affective experience for each run. In each workspace, there is a structure called *ReMoTa_Output*, which contains all behavioral data and experiment details. Specifically:
    - *ReMoTa_Output.Date_Experiment* stores the date of the experiment.
    - *ReMoTa_Output.MovieFile* stores the path to the stimulus file.
    - *ReMoTa_Output.MovieHeightInDeg* is the height of the video in visual degrees.
    - *ReMoTa_Output.MovieWidthInDeg* is the width of the video in visual degrees.
    - *ReMoTa_Output.Ratings* stores the emotion annotations provided by the participant.
    - *ReMoTa_Output.RatingsSamplingFrequency* is the sampling frequency of emotion annotations in Hz.
    - *ReMoTa_Output.ResponseTime* is the timing of the button press (not very useful).
    - *ReMoTa_Output.StepsInIntensity* is the number of levels of intensity that could be specified for each emotional instance (if == 1, then only presence/absence).
    - *ReMoTa_Output.Subject* is the participant id.
    - *ReMoTa_Output.TaggingCategories* stores the labels (in Italian) of the emotion categories used in the experiment. The order reflects the rows in *ReMoTa_Output.Ratings*.

    **behavioral/data/categorical_audiovideo/** -> 22 folders (one for each participant) storing the categorical annotations of emotion collected during the multisensory experiment. The folder also includes subs_demographics.csv. Each participant's folder contains 6 workspace files (*_run0?.mat) whose *ReMoTa_Output* structure is organized as described above.

    **behavioral/data/categorical_video/** -> 20 folders (one for each participant) storing the categorical annotations of emotion collected during the visual experiment. Same contents and organization as above.

    **behavioral/data/valence_audio/** -> 20 folders (one for each participant) storing the valence ratings collected during the auditory experiment. Same contents and organization as above.

    **behavioral/data/valence_audiovideo/** -> 21 folders (one for each participant) storing the valence ratings collected during the multisensory experiment. Same contents and organization as above.

    **behavioral/data/valence_video/** -> 21 folders (one for each participant) storing the valence ratings collected during the visual experiment. Same contents and organization as above.
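    For readers who want to inspect these workspaces outside MATLAB, the sketch below loads one run with SciPy; the participant folder and filename are hypothetical placeholders for the *_run0?.mat pattern, and the field names come from the list above:

    ```python
    # A minimal sketch (assuming SciPy) for reading one run's ratings outside
    # MATLAB; the path is illustrative, following the *_run0?.mat pattern.
    from scipy.io import loadmat

    ws = loadmat("behavioral/data/categorical_audio/sub01/sub01_run01.mat",
                 squeeze_me=True, struct_as_record=False)
    out = ws["ReMoTa_Output"]

    ratings = out.Ratings                 # emotion annotations (categories x timepoints)
    fs = out.RatingsSamplingFrequency     # sampling frequency in Hz
    labels = out.TaggingCategories        # Italian emotion labels (rows of Ratings)
    print(ratings.shape, fs)
    ```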
    #### Neuroimaging data and code

    **fmri/code/** -> fmri_preprocessing.sh is the bash script to preprocess raw fMRI data. Requires AFNI, ANTs, and FSL. All other *.m files are MATLAB functions needed to analyze the fMRI data (e.g., conjunction_univariate.m, mvpa_classification.m).

    **fmri/data/audio/** -> stores preprocessed fMRI data for 21 participants (????_sub-???_allruns-cleaned_reml2mni.nii.gz) collected during the auditory experiment. The filename indicates the participant's sensory experience (i.e., blind for congenitally blind individuals; ctrl for typically-developed people) and the id, which matches what we report in Table 2 (e.g., sub-033). In addition, this folder stores the single-participant results of the voxelwise encoding analysis (????_sub-???_allruns-cleaned_encoding_results_categorical_nn1.nii.gz), the results of the non-parametric combination at the group level (????_audio_encoding_npc_fwe_nn1.nii.gz), and the comparison between the categorical and dimensional models in terms of fitting brain activity (????_audio_adjr2_cat-dim.nii.gz).

    **fmri/data/audiovideo/** -> stores preprocessed fMRI data for 10 participants (ctrl_sub-???_allruns-cleaned_reml2mni.nii.gz) collected during the multisensory experiment. The filename indicates the participant's id, which matches what we report in Table 2 (e.g., sub-012). The folder also stores the corresponding single-participant encoding results (ctrl_sub-???_allruns-cleaned_encoding_results_categorical_nn1.nii.gz), group-level non-parametric combination results (ctrl_audiovideo_encoding_npc_fwe_nn1.nii.gz), and categorical-versus-dimensional model comparison (ctrl_audiovideo_adjr2_cat-dim.nii.gz).

    **fmri/data/video/** -> stores preprocessed fMRI data for 19 participants (????_sub-???_allruns-cleaned_reml2mni.nii.gz) collected during the visual experiment. The filename indicates the participant's sensory experience (i.e., deaf for congenitally deaf individuals; ctrl for typically-developed people) and the id, which matches what we report in Table 2 (e.g., sub-020). The folder also stores the corresponding single-participant encoding results (????_sub-???_allruns-cleaned_encoding_results_categorical_nn1.nii.gz), group-level non-parametric combination results (????_video_encoding_npc_fwe_nn1.nii.gz), and categorical-versus-dimensional model comparison (????_video_adjr2_cat-dim.nii.gz).

    **fmri/data/gm_010_final.nii.gz** -> the mask used in voxelwise encoding analysis.
    **fmri/data/vmpfc_neurosynth_mask_nn1_gm_masked.nii.gz** -> a vmPFC mask obtained from Neurosynth, employed in testing the association between the average activity of this region and the emotion model.
    **fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1.nii.gz** -> a map showing the overlap between brain regions encoding the emotion model across groups and conditions.
    **fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20.nii.gz** -> a mask of the emotion network used to classify participants' sensory experience and stimulus modality.
    **fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20_mpfc.nii.gz** -> a mask of mPFC used for the same classification.
    **fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20_rsts.nii.gz** -> a mask of right STS used for the same classification.
    **fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20_lsts.nii.gz** -> a mask of left STS used for the same classification.
    **fmri/data/mvpa_classifier_results_feature_relevance.nii.gz** -> a map showing feature relevance for classifying participants' sensory experience and stimulus modality from the activity of regions encoding the emotion model.
    **fmri/data/avg_across_groups_and_conditions_mpfc_masked_zscore_significant_emotions.nii.gz** -> a map showing the average standardized fitting coefficients of emotions significantly classified from mPFC activity across groups and conditions.
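    As a quick sanity check on the masks listed above, a short nibabel snippet (assuming nibabel is installed; paths as in the deposit) can report how many voxels a mask retains:

    ```python
    # A minimal sketch for loading the gray-matter mask used in the voxelwise
    # encoding analysis and counting the voxels it retains.
    import nibabel as nib
    import numpy as np

    mask_img = nib.load("fmri/data/gm_010_final.nii.gz")
    mask = mask_img.get_fdata() > 0        # binarize the mask
    print(mask_img.shape, int(mask.sum()), "voxels in the analysis mask")
    ```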
    #### False positive rate code

    **fpr_simulation/** -> this folder stores the code to verify that our NPC method yields a false positive rate in line with what is expected based on the alpha level. **fpr_simulation.m** is the main script to run; **generate_synthetic_fmri.m** is a function that generates synthetic fMRI timeseries; **categorical_encoding_matrix_tpshuffle.mat** stores the emotion model used in voxelwise encoding analyses. The workspace contains two variables: *categorical_encoding_matrix*, the actual encoding matrix, and *categorical_encoding_matrix_null*, the null encoding matrix generated by shuffling the timepoints before convolution. **R2_audiovideo_subjects.mat** contains the fitting values of the emotion model across all voxels and participants in the multisensory condition (variable: *timeserie_R2*) and their xyz coordinates in MNI space (variable: *coordinates*).

    **npc_fpr_1000_exp_10_part_80_vox_1614_tps_16_pred_2001_perms.mat** is the workspace storing the results of the simulation for 1,000 experiments, 10 participants, 80 voxels, 1,614 timepoints, 16 predictors, and 2,000 permutations. Relevant variables in the workspace are:
    - *all_fwc_pvalues* is a 2D matrix (experiments by voxels) storing the familywise corrected p-values for each voxel and experiment.
    - *categorical_encoding_matrix* is a 3D matrix (timepoints by predictors by permutations) storing the encoding model and its permuted versions based on timepoint shuffling.
    - *experiment_data* is a 3D matrix (timepoints by voxels by participants) storing fMRI data from one simulated experiment. A new matrix of simulated data is generated for each experiment.
    - *fpr* is the false positive rate obtained from the simulation after correction for multiple comparisons.
    - *fpr_ci* is the 95% confidence interval of the false positive rate.
    - *rsquared* is a 3D matrix (participants by voxels by permutations) storing the coefficient of determination obtained by fitting the encoding model to fMRI data in one simulated experiment. A new matrix of coefficients is obtained from each experiment.
    - *significant_experiments* is a logical column array with as many elements as simulated experiments. At each position, the value is 0 (false) if no voxel passes the familywise corrected threshold (*fwc_alpha*) or 1 (true) if at least one voxel reaches statistical significance (a false positive experiment).

    **npc_fpr_1000_exp_10_part_80_vox_1614_tps_16_pred_2001_perms_shuffle_after_convolution.mat** stores the results of the same simulation, with the only exception that timepoint shuffling is applied after convolution (i.e., the wrong method). The variables' names are therefore identical to those in the workspace above.

    ## Experiment Info

    In the main folder, the file **experiment_info.mat** stores information about the fMRI and behavioral experiments that most scripts require. These details are the tagging categories (*experiment_info.emotion_categories*), the number of timepoints in the fMRI acquisition for each run (*experiment_info.fmri_runs_duration*), the sampling frequency in the behavioral experiment (*experiment_info.behavioral_sampling_freq*), and the temporal resolution of the fMRI acquisition in seconds (*experiment_info.fmri_tr_in_sec*).
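    The simulation logic is easy to miniaturize. The toy numpy sketch below is not the deposited fpr_simulation.m: sizes are shrunk and the test statistic is a simple correlation, but it shows how a max-statistic permutation scheme keeps the familywise false positive rate near the alpha level:

    ```python
    # A toy re-implementation of the idea behind fpr_simulation.m: simulate
    # null data, derive familywise-corrected p-values from a max-statistic
    # permutation scheme, and count experiments with any false positive.
    import numpy as np

    rng = np.random.default_rng(15012018)
    n_exp, n_vox, n_tp, n_perm, alpha = 200, 20, 100, 200, 0.05
    false_positives = 0

    for _ in range(n_exp):
        y = rng.standard_normal((n_tp, n_vox))   # null fMRI-like timeseries
        x = rng.standard_normal(n_tp)            # one null regressor
        # correlation of the regressor with each voxel (observed statistic)
        obs = np.abs(np.corrcoef(x, y, rowvar=False)[0, 1:])
        # permutation null: shuffle the regressor's timepoints
        max_null = np.empty(n_perm)
        for p in range(n_perm):
            xp = rng.permutation(x)
            max_null[p] = np.abs(np.corrcoef(xp, y, rowvar=False)[0, 1:]).max()
        # familywise-corrected p-value per voxel via the max distribution
        fwc_p = (max_null[None, :] >= obs[:, None]).mean(axis=1)
        false_positives += int((fwc_p < alpha).any())

    print("empirical FWER:", false_positives / n_exp)  # should hover near alpha
    ```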
    ## Code/Software

    In the main folder, the **behavioral_parent_script.m** and **fmri_parent_script.m** files contain examples to replicate the analyses reported in our work. For the code to run properly, you need **SPM12** and **MATLAB Tools for NIfTI and ANALYZE image** in the MATLAB path. Also, the **chaotic system toolbox** and the functions *mpcdf.m*, *mpinv*, and *wachter.m* should be added to the MATLAB path. For exporting figures we use the **export_fig** MATLAB package and colorbrewer colormaps. All analyses are implemented in MATLAB R2022a.

    Description of scripts and functions for the **analysis of behavioral data**:

    -*behavioral_parent_script.m*
    ```
    Running the behavioral_parent_script.m file will result in all behavioral analyses being performed and individual figures being created. Specifically, the script evaluates the most frequent emotions for each condition. It also generates emotion-by-emotion representational dissimilarity matrices (RDMs) from categorical annotations collected under the three experimental conditions and affective norms reported in Warriner et al., 2013. Then, it computes Kendall's tau correlation between all pairings of RDMs to assess the similarity of emotions in a latent space across conditions and between behavioral annotations and affective norms. Lastly, it computes principal components (PCs) from categorical ratings and estimates the correlation between PCs and behavioral valence ratings.
    ```

    -*behavioral/code/analyze_emotion_ratings.m*
    ```
    The analyze_emotion_ratings.m function takes as input the parent directory storing categorical annotations of emotion from multiple participants and produces single-participant and group-level timeseries (i.e., the number of participants reporting an emotional instance at each timepoint) of emotion ratings downsampled to fMRI resolution.

    usage: [ratings,downsampled_ratings,aggregated_ratings] = analyze_emotion_ratings('behavioral/data/categorical_audio/',experiment_info.fmri_runs_duration,experiment_info.behavioral_sampling_freq,experiment_info.fmri_tr_in_sec)
    -------
    INPUT:
    behavioral/data/categorical_audio/: path to behavioral data
    experiment_info.fmri_runs_duration: duration of fMRI runs
    experiment_info.behavioral_sampling_freq: sampling frequency of behavioral data
    experiment_info.fmri_tr_in_sec: repetition time of the fMRI scan
    -------
    OUTPUT:
    ratings: each participant's emotion annotations, not downsampled to the fMRI temporal resolution
    downsampled_ratings: each participant's emotion annotations downsampled to the fMRI temporal resolution
    aggregated_ratings: group-level emotion annotations downsampled to the fMRI temporal resolution
    ```

    -*behavioral/code/analyze_valence_ratings.m*
    ```
    The analyze_valence_ratings.m function takes as input the parent directory storing valence scores from multiple participants and produces single-participant and group-level timeseries (i.e., the median valence at each timepoint) downsampled to fMRI resolution.

    usage: [ratings,downsampled_ratings,aggregated_ratings] = analyze_valence_ratings('behavioral/data/valence_audio/',experiment_info.fmri_runs_duration,experiment_info.behavioral_sampling_freq,experiment_info.fmri_tr_in_sec)
    -------
    INPUT:
    behavioral/data/valence_audio/: path to behavioral data
    experiment_info.fmri_runs_duration: duration of fMRI runs
    experiment_info.behavioral_sampling_freq: sampling frequency of behavioral data
    experiment_info.fmri_tr_in_sec: repetition time of the fMRI scan
    -------
    OUTPUT:
    ratings: each participant's valence ratings, not downsampled to the fMRI temporal resolution
    downsampled_ratings: each participant's valence ratings downsampled to the fMRI temporal resolution
    aggregated_ratings: group-level valence ratings downsampled to the fMRI temporal resolution
    ```

    -*behavioral/code/jaccard_agreement.m*
    ```
    This script computes the agreement between participants in emotion annotations and its significance. Agreement is computed using Jaccard coefficients, and significance is established through permutation testing. The script also produces the figures reported in the Supplementary materials of our paper.
    ```
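    As an illustration of the agreement metric, here is a short numpy sketch (not the deposited script: it assumes binary timeseries and uses circular shifts as one possible permutation scheme):

    ```python
    # A minimal sketch of Jaccard agreement between two raters' binary
    # emotion annotations, with a permutation p-value from circular shifts.
    import numpy as np

    def jaccard(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 0.0

    rng = np.random.default_rng(0)
    rater1 = rng.random(500) < 0.2        # stand-ins for two binary timeseries
    rater2 = rng.random(500) < 0.2

    obs = jaccard(rater1, rater2)
    null = np.array([jaccard(np.roll(rater1, rng.integers(1, 500)), rater2)
                     for _ in range(2000)])
    p = (null >= obs).mean()
    print(f"Jaccard = {obs:.3f}, permutation p = {p:.3f}")
    ```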
    -*behavioral/code/prepare_encoding_regressors_new.m*
    ```
    The prepare_encoding_regressors_new.m function creates the encoding model and its null version for voxelwise encoding analyses. This version of the script also computes principal components from categorical ratings of emotion and sets the optimal number of PCs using the Wachter method. Please note that prepare_encoding_regressors.m is an older version of the function that does not include the Wachter method.

    usage: [encoding_matrix,encoding_matrix_null,encoding_matrix_pc,encoding_matrix_pc_null,pc_coefficients,explained_variance,n_optimal_components] = prepare_encoding_regressors_new(aggregated_ratings,agreement_threshold,hrf_convolution,hrf_parameters,fmri_tr_in_sec,add_intercept,scaling,n_perm,random_seed,do_pc)
    -------
    INPUT:
    aggregated_ratings: group-level timeseries of emotion ratings (typically the output of analyze_emotion_ratings.m)
    agreement_threshold: the minimum number of participants reporting an emotional instance in the same timepoint (e.g., 2)
    hrf_convolution: if set to 'yes', emotion ratings are convolved with a hemodynamic response function
    hrf_parameters: the parameters for HRF convolution (e.g., [6 16 1 1 6 0 32], the standard in SPM12)
    fmri_tr_in_sec: the temporal resolution of the fMRI acquisition
    add_intercept: if set to 'yes', the intercept is added to the encoding model
    scaling: if set to 'yes', the encoding matrix is scaled based on the overall maximum agreement across subjects
    n_perm: number of permutations of the timepoints under the null hypothesis (e.g., 2000)
    random_seed: the randomization seed for reproducibility (e.g., 15012018)
    do_pc: if set to 'yes', the function also computes principal components and determines the optimal number of PCs using the Wachter method
    -------
    OUTPUT:
    encoding_matrix: the encoding matrix based on categorical ratings of emotion
    encoding_matrix_null: the null encoding matrix obtained from timepoint shuffling of the original encoding matrix
    encoding_matrix_pc: the encoding matrix based on principal components (i.e., the dimensional model)
    encoding_matrix_pc_null: the null encoding matrix based on principal components
    pc_coefficients: the coefficients of the PCs
    explained_variance: the variance explained by each PC
    n_optimal_components: the optimal number of components according to the Wachter method
    ```
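    The HRF-convolution step is the part readers most often re-implement. A minimal Python sketch follows; it uses a canonical double-gamma response as an approximation of the SPM HRF with the [6 16 1 1 6 0 32] defaults, not the deposited MATLAB code, and all values are placeholders:

    ```python
    # A minimal sketch of HRF convolution: a double-gamma response
    # (positive peak a few seconds after onset, later undershoot scaled by
    # 1/6) convolved with a binary rating timeseries via numpy.
    import numpy as np
    from scipy.stats import gamma

    def double_gamma_hrf(tr, duration=32.0):
        t = np.arange(0, duration, tr)
        h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
        return h / h.sum()

    tr = 2.0                              # fmri_tr_in_sec, for illustration
    ratings = np.zeros(200)               # one aggregated emotion timeseries
    ratings[[20, 21, 22, 90, 91]] = 1     # a few annotated instances

    regressor = np.convolve(ratings, double_gamma_hrf(tr))[: ratings.size]
    ```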
    Description of scripts and functions for the **analysis of fMRI data**:

    -*fmri_parent_script.m*
    ```
    Running the fmri_parent_script.m file will result in all fMRI analyses being performed. Specifically, it will perform voxelwise encoding analysis at the single-participant level and will estimate group-level significance using a non-parametric combination approach. Univariate conjunction analyses are also performed on group-level results obtained from all groups and conditions. In addition, the script provides results for univariate contrasts between groups and conditions. Running the script also performs multivoxel pattern classification of participants' sensory experience and stimulus modality, as well as crossdecoding of valence from regions encoding the emotion model.
    ```

    -*fmri/code/voxelwise_encoding_cluster_corr_new.m*
    ```
    The voxelwise_encoding_cluster_corr_new.m function performs voxelwise encoding at the single-participant level.

    usage: voxelwise_encoding_results = voxelwise_encoding_cluster_corr_new(encoding_matrix,encoding_matrix_null,p_forming_thr,nn_type,fwe_threshold,fmri_parent_directory,input_files,output_suffix,fmri_mask_file,demean_fmri,n_cpus,save_to_disk)
    -------
    INPUT:
    encoding_matrix: the encoding matrix, typically the output of the prepare_encoding_regressors_new.m function
    encoding_matrix_null: permuted versions of the encoding matrix, also coming from the prepare_encoding_regressors_new.m function
    p_forming_thr: the cluster defining threshold (e.g., 0.001)
    nn_type: the type of connection used to determine a cluster. nn_type = 1 means connected if faces touch; nn_type = 2 means connected if faces or edges touch; nn_type = 3 means connected if faces, edges, or corners touch
    fwe_threshold: the familywise corrected threshold (e.g., 0.05)
    fmri_parent_directory: the parent directory storing preprocessed single-participant fMRI data
    input_files: the name of the NIfTI files (e.g., 'ctrl_sub-*_allruns-cleaned_reml2mni.nii')
    output_suffix: the suffix of the output filename (e.g., '_encoding_results_categorical_nn1')
    fmri_mask_file: a mask to limit the search for significance (e.g., 'fmri/data/gm_010_final.nii')
    demean_fmri: if 'yes', mean-center the fMRI signal
    n_cpus: number of CPUs for parallel computing
    save_to_disk: if 'yes', voxelwise encoding results are saved to disk
    -------
    OUTPUT:
    voxelwise_encoding_results: a matrix storing voxelwise encoding results for each voxel
    ```

    -*fmri/code/voxelwise_encoding_group_analysis_npc.m*
    ```
    The voxelwise_encoding_group_analysis_npc.m function computes group results for categorical ratings using non-parametric combination.

    usage: voxelwise_group_results = voxelwise_encoding_group_analysis_npc(parent_dir,single_sub_matfiles,fmri_mask_file,output_filename,fwe_threshold,cluster_forming_thr,nn_type)
    -------
    INPUT:
    parent_dir: the folder with MATLAB workspaces storing the results of the voxelwise encoding analysis for each participant, typically the results of the voxelwise_encoding_cluster_corr_new.m function (e.g., 'fmri/data/audio/')
    single_sub_matfiles: the prefix of single-participant MATLAB workspaces (e.g., 'ctrl_*_encoding_results_categorical_nn1.mat')
    fmri_mask_file: a mask to limit the search for significance (e.g., 'fmri/data/gm_010_final.nii')
    output_filename: the suffix of the output filename (e.g., 'ctrl_audio_encoding_npc_fwe_nn1.nii')
    fwe_threshold: the familywise corrected threshold (e.g., 0.05)
    cluster_forming_thr: the cluster defining threshold (e.g., 0.001)
    nn_type: the type of connection used to determine a cluster
    -------
    OUTPUT:
    voxelwise_group_results: a matrix storing the familywise corrected p-value for each voxel
    ```

    -*fmri/code/conjunction_univariate.m*
    ```
    The conjunction_univariate.m function performs conjunctions between group-level results obtained from non-parametric combination. Please refer to equations e-h in the manuscript for further details.

    usage: conjunction_results = conjunction_univariate(path_to_univariate_results,save_to_disk)
    -------
    INPUT:
    path_to_univariate_results: the path pointing to all group-level voxelwise encoding results (e.g., 'fmri/data/*/*_encoding_npc_fwe_nn1.nii')
    save_to_disk: if 'yes', conjunction results are saved to disk
    -------
    OUTPUT:
    conjunction_results: a structure storing the results of univariate conjunction analyses
    ```
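    To make the voxelwise encoding step concrete, here is a toy numpy version of its core: fit the encoding matrix to every voxel by least squares and keep one R² per voxel. The permutation and cluster-correction machinery of the deposited code is omitted, and the arrays are random placeholders:

    ```python
    # A toy sketch of voxelwise encoding: least-squares fit of the encoding
    # matrix to all voxels at once, then a coefficient of determination per voxel.
    import numpy as np

    rng = np.random.default_rng(1)
    n_tp, n_pred, n_vox = 300, 16, 1000
    X = rng.standard_normal((n_tp, n_pred))      # encoding matrix (e.g., emotions)
    X = np.column_stack([np.ones(n_tp), X])      # add intercept
    Y = rng.standard_normal((n_tp, n_vox))       # one participant's voxel timeseries

    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)  # all voxels fit in one call
    resid = Y - X @ beta
    ss_res = (resid ** 2).sum(axis=0)
    ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
    r2 = 1 - ss_res / ss_tot                      # one R^2 per voxel
    print(r2.shape, r2.max())
    ```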
    -*fmri/code/voxelwise_group_comparison.m*
    ```
    The voxelwise_group_comparison.m function compares the fit of the full emotion model between conditions and/or groups.

    usage: results = voxelwise_group_comparison(condition_a,condition_b,group_a,group_b,fmri_mask_file,nn_type,n_perm,cluster_forming_thr,number_of_cpus,save_to_disk)
    -------
    INPUT:
    condition_a: the experimental condition of the first sample (e.g., audio)
    condition_b: the experimental condition of the second sample (e.g., audio)
    group_a: the sensory experience of the first sample (e.g., ctrl)
    group_b: the sensory experience of the second sample (e.g., blind)
    fmri_mask_file: a mask to limit the search for significance (e.g., 'fmri/data/gm_010_final.nii')
    nn_type: the type of connection used to determine a cluster
    n_perm: number of permutations for establishing statistical significance (e.g., 2000)
    cluster_forming_thr: the cluster defining threshold (e.g., 0.001)
    number_of_cpus: number of CPUs for parallel computing
    save_to_disk: if 'yes', results are saved to disk
    -------
    OUTPUT:
    results: a structure storing the results of univariate contrasts
    ```

    -*fmri/code/mvpa_classification.m*
    ```
    The mvpa_classification.m function is used to classify participants' sensory experience, and the modality through which the emotion elicitation paradigm was administered, from regions encoding the emotion model. We use an SVM classifier and the F1 score as the performance metric.

    usage: mvpa_classifier_output = mvpa_classification(parent_dir, nn_type, roi_file, n_folds, n_features, n_perms, performance_metric, save_to_disk, output_filename)
    -------
    INPUT:
    parent_dir: the directory storing the single-participant workspaces obtained from voxelwise_encoding_cluster_corr_new.m (e.g., 'fmri/data')
    nn_type: the connection type used to define a cluster in single-participant analyses (e.g., 'nn1')
    roi_file: a mask determining the voxels used to perform the classification (e.g., 'fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20.nii')
    n_folds: number of folds for the cross-validation procedure (e.g., 5)
    n_features: number of features (i.e., voxels) used in the classification analysis (e.g., 1000)
    n_perms: number of permutations for establishing the statistical significance of the classification (e.g., 2000)
    performance_metric: the metric used to evaluate classifier performance (e.g., 'f1score')
    save_to_disk: if 'yes', results are saved to disk
    output_filename: a filename for the results (e.g., 'fmri/data/mvpa_classifier_results')
    -------
    OUTPUT:
    mvpa_classifier_output: a structure storing the results of the multivariate classification
    ```
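    For intuition, an analogous classification can be expressed in a few lines of scikit-learn; this sketch uses random placeholder data and standard library calls, and the deposited MATLAB implementation differs in detail (feature selection, permutation testing):

    ```python
    # A minimal sketch of SVM classification with k-fold cross-validation
    # and an F1 metric, mirroring the structure of mvpa_classification.m.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.standard_normal((40, 1000))   # participants x voxel features
    y = rng.integers(0, 2, size=40)       # e.g., sensory-experience labels

    scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5, scoring="f1_macro")
    print("mean F1:", scores.mean())
    ```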
    -*fmri/code/crossdecoding_ridge.m*
    ```
    The crossdecoding_ridge.m function is employed to crossdecode valence from regions significantly associated with the emotion model. This tests whether some brain areas map valence in a supramodal manner. One can explore different masks to assess the spatial specificity of the results (e.g., the entire network encoding the emotion model, the mPFC ROI).

    usage: crossdecoding_results = crossdecoding_ridge(fmri_data_dir, file_prefix, mask_file, random_seed, valence_data_dir, experiment_info, ridge_values, n_perm, save_output)
    -------
    INPUT:
    fmri_data_dir: the directory storing preprocessed single-participant fMRI data (e.g., 'fmri/data')
    file_prefix: the prefix of single-participant NIfTI files (e.g., '_reml2mni.nii')
    mask_file: a mask determining the voxels used to crossdecode valence (e.g., 'fmri/data/npc_overlap_conditions_groups_fwe_clust_nn1_thr1_min20.nii')
    random_seed: the randomization seed for reproducibility (e.g., 14051983)
    valence_data_dir: the parent directory storing behavioral ratings of valence (e.g., 'behavioral/data')
    experiment_info: the variable containing experiment details (the structure stored in the experiment_info.mat workspace)
    ridge_values: the penalization values to be tested in the cross-validation procedure (e.g., logspace(-3,2,1000))
    n_perm: number of permutations for establishing statistical significance (e.g., 2000)
    save_output: if 'yes', results are saved to disk
    -------
    OUTPUT:
    crossdecoding_results: a structure storing the results of the crossdecoding of valence
    ```

    -*fmri/code/emotion_decoding_and_coefficients_similarity_in_mpfc.m*
    ```
    The emotion_decoding_and_coefficients_similarity_in_mpfc.m script performs the decoding of emotional instances from mPFC regression coefficients.
    ```

    -*fmri/code/multiclass_classifier_performance.m*
    ```
    The multiclass_classifier_performance.m function is used to estimate performance metrics in the context of multiclass classification.

    usage: [macro_metrics, weighted_metrics, micro_metrics, single_class_metrics, classifier_errors] = multiclass_classifier_performance(confusion_matrix)
    -------
    INPUT:
    confusion_matrix: a confusion matrix resulting from a classification procedure
    -------
    OUTPUT:
    macro_metrics: stores, in the following order, the macro accuracy, precision, recall, and F1 score for all the evaluated confusion matrices
    weighted_metrics: stores, in the following order, the weighted average accuracy, precision, recall, and F1 score for all the evaluated confusion matrices; the averages are weighted by the number of elements in each class
    micro_metrics: in multiclass classification, micro precision, recall, and F1 score are the same number; this is what micro_metrics stores for each evaluated confusion matrix
    single_class_metrics: stores, in the following order, the accuracy, precision, recall, and F1 score of each class for all the evaluated confusion matrices
    classifier_errors: stores, in the following order, the number of true positives, true negatives, false positives, and false negatives of each class for all the evaluated confusion matrices
    ```

    -*fmri/code/neurosynth_vmpfc.m*
    ```
    The neurosynth_vmpfc.m script estimates the relationship between the average activity of vmPFC and the emotion model.
    ```

    Emotion and perception are tightly intertwined, as affective experiences often arise from the appraisal of sensory information. Nonetheless, whether the brain encodes emotional instances using a sensory-specific code or in a more abstract manner is unclear. Here, we answer this question by measuring the association between emotion ratings collected during a unisensory or multisensory presentation of a full-length movie and brain activity recorded in typically-developed, congenitally blind, and congenitally deaf participants. Emotional instances are encoded in a vast network encompassing sensory, prefrontal, and temporal cortices. Within this network, the ventromedial prefrontal cortex stores a categorical representation of emotion independent of modality and previous sensory experience, and the posterior superior temporal cortex maps the valence dimension using an abstract code. Sensory experience, more than modality, impacts how the brain organizes emotional information outside supramodal regions, suggesting the existence of a scaffold for the representation of emotional states whose functioning is shaped by sensory inputs during development.
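    The crossdecoding step also has a compact scikit-learn analogue: fit ridge regression (with penalties searched over a log-spaced grid, as in the ridge_values argument) to predict valence from ROI voxels in one condition and test it on another. Arrays below are random placeholders, not the deposited data:

    ```python
    # A minimal sketch of ridge crossdecoding of valence across conditions,
    # mirroring the structure (not the code) of crossdecoding_ridge.m.
    import numpy as np
    from sklearn.linear_model import RidgeCV

    rng = np.random.default_rng(3)
    X_train = rng.standard_normal((1600, 500))   # timepoints x ROI voxels, condition A
    y_train = rng.standard_normal(1600)          # group-level valence, condition A
    X_test = rng.standard_normal((1600, 500))    # timepoints x ROI voxels, condition B
    y_test = rng.standard_normal(1600)           # group-level valence, condition B

    model = RidgeCV(alphas=np.logspace(-3, 2, 100)).fit(X_train, y_train)
    pred = model.predict(X_test)
    print("crossdecoding r:", np.corrcoef(pred, y_test)[0, 1])
    ```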

    DRYAD; ZENODO · Dataset · 2024 · License: CC0 · Data sources: Datacite; ZENODO
  • Authors: Rumbeiha, Wilson;

    # Metabolomic profiles of acute and chronic ambient hydrogen sulfide exposure in a mouse model

    # General information

    This README file was generated on 2024-01-04 by Dongsuk Kim.

    ## Author Information

    A. Principal Investigator Contact Information
    Name: Wilson Rumbeiha
    Institution: University of California at Davis
    Email:

    B. Associate or Co-investigator Contact Information
    Name: Dongsuk Kim
    Institution: University of California at Davis
    Email:

    # Dataset contents

    This dataset contains three normalized metabolomics data tables from brainstem tissue following acute and subchronic hydrogen sulfide exposures.

    # SHARING/ACCESS INFORMATION

    1. Licenses/restrictions placed on the data: CC0 1.0 Universal (CC0 1.0) Public Domain
    2. Links to publications that cite or use the data: Dong-Suk Kim, Cristina MS Maldonado, Cecilia Giulivi, and Wilson K Rumbeiha (2024). Metabolomic signatures of brainstem in mice following acute and chronic hydrogen sulfide exposure.
    3. Links to other publicly accessible locations of the data: None
    4. Links/relationships to ancillary data sets: None
    5. Was data derived from another source? No
    6. Recommended citation for this dataset: Dong-Suk Kim, Cristina MS Maldonado, Cecilia Giulivi, and Wilson K Rumbeiha (2024). Data from: Metabolomic signatures of brainstem in mice following acute and chronic hydrogen sulfide exposure. Dryad Digital Repository.

    # Experimental procedures

    ## Exposure paradigm

    Hydrogen sulfide (H2S) is an environmental toxicant of health concern following acute or chronic human exposures. Male 6-8 week-old C57BL/6J mice were exposed by whole-body inhalation to 1000 ppm H2S for 45 min and euthanized at 5 min and 72 h post exposure for the acute study. For the subchronic study, mice were exposed to 5 ppm H2S 2 h/day, 5 days/week for 5 weeks. The brainstem was removed for metabolomic analysis.

    ## Metabolomics analysis

    The metabolomics analyses consisted of three assays: (1) primary metabolism by GC-TOF MS, (2) biogenic amines (hydrophilic compounds) by HILIC-MS/MS, and (3) lipidomics by RPLC-MS/MS. Metabolomics were performed at the West Coast Metabolomics Center (WCMC), University of California at Davis, CA, USA.

    # Results

    348, 311, and 565 known metabolites were detected and analyzed by the primary metabolism, biogenic amines, and lipidomics assays, respectively. Compared to the room-air control groups, 33, 19, and 46 metabolites were increased at 5 min post acute exposure, at 72 h post acute exposure, and after subchronic ambient exposure, respectively, and 22, 17, and 32 metabolites were decreased at the same timepoints. Acute H2S exposure decreased concentrations of the excitatory neurotransmitters aspartate and glutamate, while the inhibitory neurotransmitter serotonin was increased. Glutamate and serotonin were also decreased after ambient H2S exposure. Branched-chain amino acids, fructose, and glucose were increased by acute H2S exposure. After ambient H2S exposure, glucose was decreased while MUFAs, PUFAs, inosine, and hypoxanthine were increased. Collectively, these results provide important mechanistic clues about acute and subchronic ambient H2S poisonings and show that H2S alters neurotransmission homeostasis.

    # Description of the data and file structure

    1. PA data.txt contains normalized biogenic amines metabolomics data (HILIC-MS/MS assay).
    2. PM data.txt contains normalized primary metabolism metabolomics data (GC-TOF MS assay).
    3. Lipid data.txt contains normalized lipidomics data (RPLC-MS/MS assay).
    ## DATA-SPECIFIC INFORMATION FOR: PA data.txt

    1. Number of headers: 36
    2. Number of data rows (identified metabolites): 309
    3. Header list:
    - identifier: index number of the identified metabolite
    - Metabolite name: common name of the identified metabolite
    - Species: chemical ionization mode
    - InChiKey: International Chemical Identifier
    - MSI level: Metabolomics Standards Initiative level
    - m/z: mass-to-charge ratio
    - RT: retention time (minutes)
    - A - 1 … A - 5: peak heights of the room-air control group for the 1000 ppm H2S exposure, replicates 1-5
    - B - 1 … B - 4: peak heights at 5 min post 1000 ppm H2S exposure, replicates 1-4
    - C - 1 … C - 5: peak heights at 72 h post 1000 ppm H2S exposure, replicates 1-5
    - D - 1 … D - 4: peak heights of the room-air control group for the 5 ppm H2S exposure, replicates 1-4
    - E - 1 … E - 5: peak heights of the 5 ppm H2S exposure group, replicates 1-5
    - MtdBlank001-003: peak heights of blanks 1-3
    - PoolQC001-003: peak heights of pooled quality controls 1-3
    4. Abbreviation: na: non-identified InChiKey
    ## DATA-SPECIFIC INFORMATION FOR: PM data.txt

    1. Number of headers: 32
    2. Number of data rows (identified metabolites): 126
    3. Header list:
    - Metabolite name: name of the identified metabolite
    - ret.index: retention time (minutes)
    - quant mz: mass-to-charge ratio
    - mass spec: mass spectra
    - PubChem: PubChem ID for the specific metabolite
    - KEGG: Kyoto Encyclopedia of Genes and Genomes ID
    - InChI Key: International Chemical Identifier
    - A - 1 … A - 5 / B - 1 … B - 4 / C - 1 … C - 5 / D - 1 … D - 4 / E - 1 … E - 5: peak heights for the same exposure groups and replicates as in PA data.txt
    - pool_001, pool_002: peak heights of pooled quality controls 1-2
    4. Abbreviation: na: non-identified PubChem or KEGG ID
    ## DATA-SPECIFIC INFORMATION FOR: Lipid data.txt

    1. Number of headers: 38
    2. Number of data rows (identified metabolites): 565
    3. Header list:
    - identifier: ID of the identified metabolite
    - name: name of the identified metabolite
    - ion species: ion species
    - InChiKey: International Chemical Identifier
    - m/z: mass-to-charge ratio
    - ESI mode: ESI mode
    - ret.time: retention time (minutes)
    - A - 1 … A - 5 / B - 1 … B - 4 / C - 1 … C - 5 / D - 1 … D - 4 / E - 1 … E - 5: peak heights for the same exposure groups and replicates as in PA data.txt
    - PoolQC001-004: peak heights of pooled quality controls 1-4
    - MtdBlank001-004: peak heights of blanks 1-4

    Missing data code: na
    Raw data were normalized using the standard normalization procedure of the West Coast Metabolomics Center (WCMC).
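    To make the table layout concrete, a small pandas sketch follows. It assumes the .txt files are tab-delimited, which should be verified against the downloads; the column grouping follows the header lists above:

    ```python
    # A minimal sketch for working with one of the normalized tables;
    # the tab delimiter is an assumption to check against the actual files.
    import pandas as pd

    pa = pd.read_csv("PA data.txt", sep="\t", na_values="na")

    # group the replicate columns described in the header list
    ctrl_acute = [c for c in pa.columns if c.startswith("A -")]
    post_5min = [c for c in pa.columns if c.startswith("B -")]

    # mean peak height per metabolite and a simple fold change
    fold = pa[post_5min].mean(axis=1) / pa[ctrl_acute].mean(axis=1)
    print(pa["Metabolite name"].head(), fold.head())
    ```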

    DRYAD; ZENODO · Dataset · 2024 · License: CC0 · Data sources: Datacite; ZENODO
  • Authors: Shan, Zack

    We provide a dataset of pre-processed anatomic and functional brain MR images from 10 healthy controls. The dataset can be used to replicate the results of the manuscript titled 'Functional MRI of the brain stem for assessing its autonomic functions: from imaging parameters and analysis to functional atlas', which presented an optimised functional imaging protocol for the brainstem (FIBS). To protect participants' privacy, skulls were removed from the shared MRI images and the brain images were normalised to Montreal Neurological Institute (MNI) space; details of the pre-processing are provided in the paper mentioned above. The atlas includes 12 regions of interest (ROIs) in the brainstem involved in autonomic control. This dataset could potentially be used to: 1. compare temporal signal-to-noise ratios among different imaging protocols (a tSNR sketch follows below); 2. provide the brainstem anatomic locations involved in autonomic control; 3. add to the normal-control database for brainstem studies.
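
    Since use case 1 above concerns temporal signal-to-noise ratio (tSNR), here is a minimal sketch of how tSNR is conventionally computed from a 4D functional run using nibabel and numpy; the file name func.nii.gz and the use of all nonzero-variance voxels in place of a brainstem ROI mask are assumptions for illustration.

        import nibabel as nib
        import numpy as np

        # Load a 4D fMRI run; "func.nii.gz" is a placeholder file name.
        img = nib.load("func.nii.gz")
        data = img.get_fdata()  # shape: (x, y, z, time)

        # tSNR = temporal mean / temporal standard deviation, per voxel.
        mean_t = data.mean(axis=-1)
        std_t = data.std(axis=-1)
        tsnr = np.divide(mean_t, std_t, out=np.zeros_like(mean_t), where=std_t > 0)

        # One summary value per protocol; in practice this would be restricted
        # to a brainstem ROI mask rather than all nonzero-variance voxels.
        print("median tSNR:", np.median(tsnr[std_t > 0]))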

    ZENODO
    Dataset . 2024
    License: CC BY
    Data sources: ZENODO
  • Authors: Takashima, A. (Atsuko); Francesca Carota; Schoots, V.C.; Redmann, A.; +2 Authors

    When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating the concept, irrespective of whether that concept is to be named. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented as black line drawings. Participants also performed two other tasks on the same objects, color naming and semantic judgment, to test whether the activation pattern observed during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category but different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri (“Human V4”) correlated with a representational model encoding the red–green distinction weighted by the importance of the color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself has to be named.
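
    The representational similarity analysis described here compares a model dissimilarity matrix against neural pattern dissimilarities. The following sketch builds a toy model RDM in which objects differ only across the red-green category boundary, weighted by a per-object color-diagnosticity value, and rank-correlates it with a (here randomly generated) neural RDM. The item values and the exact weighting scheme are illustrative assumptions, not the study's actual model.

        import numpy as np
        from scipy.spatial.distance import squareform
        from scipy.stats import spearmanr

        # Toy items: color category (0 = green, 1 = red) and a per-object
        # weight for how diagnostic color is; values are illustrative only.
        category = np.array([0, 0, 0, 1, 1, 1])
        weight = np.array([0.9, 0.8, 0.7, 0.9, 0.6, 0.8])

        n = len(category)
        model_rdm = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                # Dissimilar only across color categories, scaled by pairwise
                # color diagnosticity (one plausible weighting scheme).
                model_rdm[i, j] = (category[i] != category[j]) * weight[i] * weight[j]

        # A neural RDM would come from pairwise distances between fMRI
        # activity patterns; random values stand in for real data here.
        rng = np.random.default_rng(0)
        neural_rdm = squareform(rng.random(n * (n - 1) // 2))

        # Compare upper triangles with a rank correlation, as is typical in RSA.
        iu = np.triu_indices(n, k=1)
        rho, _ = spearmanr(model_rdm[iu], neural_rdm[iu])
        print(f"model-neural Spearman rho = {rho:.3f}")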

    https://doi.org/10.34973/agnh-...
    Dataset . 2024
    License: CC 0
    Data sources: Datacite
  • This dataset comprises a curated collection of articles. Each article is categorized according to four key criteria: 1) the specific type of article, 2) the year of publication, 3) the variety of neuroscience tools employed (applicable only to empirical studies), and 4) its relevance and application in the field of marketing. For specifics on the data, see: Alvino, L., Pavone, L., Abhishta, A., & Robben, H. (2020). Picking Your Brains: Where and How Neuroscience Tools Can Enhance Marketing Research. Frontiers in Neuroscience, 14, Article 577666. https://doi.org/10.3389/fnins.2020.577666
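
    As an illustration of how the four classification criteria could be tallied, a short pandas sketch follows; the file name curated_articles.csv and the column names year, article_type, and neuroscience_tool are hypothetical placeholders for whatever headers the actual file uses.

        import pandas as pd

        # Placeholder file and column names mirroring the four criteria.
        articles = pd.read_csv("curated_articles.csv")

        # Articles per publication year and article type.
        print(articles.groupby(["year", "article_type"]).size().unstack(fill_value=0))

        # Neuroscience tools, counted over empirical studies only, since
        # criterion 3 applies only to empirical work.
        empirical = articles[articles["article_type"] == "empirical"]
        print(empirical["neuroscience_tool"].value_counts())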

    4TU.ResearchData
    Dataset . 2023
    License: CC BY NC
    Data sources: 4TU.ResearchData
    4TU.ResearchData
    Dataset . 2024
    License: CC BY NC
    Data sources: 4TU.ResearchData
    4TU.ResearchData | science.engineering.design
    Dataset . 2024
    License: CC BY NC
    Data sources: Datacite
    4TU.ResearchData | science.engineering.design
    Dataset . 2023
    License: CC BY NC
    Data sources: Datacite