Index of /sereno/csurf/fsaverage-labels/CsurfMaps1-parcellation/mapdata-labels

 Name                          Last modified      Size

 lh-mapsvisADJPHA.label        2022-08-01 20:12   1.9M
 lh-mapsvisADJPHA.label.gii    2023-06-07 21:48   217K
 rh-mapsvisADJPHA.label        2022-08-01 20:12   1.8M
 rh-mapsvisADJPHA.label.gii    2023-06-07 21:48   214K

########################################################################################################
https://pages.ucsd.edu/~msereno/csurf/fsaverage-labels/CsurfMaps1-parcellation/mapdata-labels/README.txt
########################################################################################################

AVERAGE RETINOTOPY LABELS

These files:

    lh-mapsvisADJPHA.label
    rh-mapsvisADJPHA.label

are complex-valued (amplitude=1), 6-args-per-line FreeSurfer label
files for subject fsaverage:

  #!ascii , from subject fsaverage (complex data, amp=1, always right hemifield)
  42236
  1 -16.135 -66.123 58.725 0.999754 0.022179
  6 -35.970 -85.399 -1.931 0.893934 -0.448198
  7 -21.556 -61.110 9.182 0.763055 -0.646333
  ...

The FreeSurfer label files have also been converted to real-valued
GIFTI labels by first converting the complex values above (last 2 args)
to phase angles in radians with atan2, and then using mris_convert
(see the sketch after the file list):

  lh-mapsvisADJPHA.label.gii
  rh-mapsvisADJPHA.label.gii
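
For reference, a minimal Python sketch of that atan2 conversion step (not the
actual script used here; the function name and output path are placeholders,
and the last two args are assumed to be the (cos, sin) of the phase) could
look like this:

  import numpy as np

  def complex_label_to_phase(in_path, out_path):
      # Read a 6-column complex-valued FreeSurfer label
      # (vertex, x, y, z, re, im) and write a standard 5-column label
      # whose stat field is the phase angle in radians, ready for mris_convert.
      with open(in_path) as f:
          header = f.readline()            # first line of a .label is a comment
          n_vertices = int(f.readline())
          rows = [line.split() for line in f if line.strip()]
      with open(out_path, "w") as f:
          f.write(header)
          f.write(f"{n_vertices}\n")
          for vtx, x, y, z, re, im in rows:
              phase = np.arctan2(float(im), float(re))   # atan2(sin, cos)
              f.write(f"{vtx} {x} {y} {z} {phase:.6f}\n")

  # e.g. complex_label_to_phase("lh-mapsvisADJPHA.label", "lh-phase.label"),
  # followed by FreeSurfer's mris_convert to produce lh-mapsvisADJPHA.label.gii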

There are subtle pitfalls to consider when averaging complex-valued
retinotopic data across subjects.

The biggest problem is due to slight remaining spatial offsets between
the maps in different subjects. These can arise from slight differences in
sulcal morphology, areal size, exact areal orientation, and even total
area number (all humans are not guaranteed to have the same number of
visual areas). All of these factors result in a decrease in the range of
polar angles ('regression to the horizontal meridian') in a cross-subject
retinotopic average. Since higher-level areas are generally smaller,
the same spatial offset has a larger effect of this kind in a higher-level
area than in a larger, earlier area.
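
As a toy illustration of this effect (my own sketch, not derived from this
data set), averaging amplitude-1 complex values across simulated 'subjects'
whose otherwise identical maps are offset by a few vertices already
compresses the recovered polar angle range near the area borders:

  import numpy as np

  # Two mirror-reversed areas (V1/V2-like): polar angle sweeps -90..+90..-90 deg.
  n = 200
  angle_deg = np.concatenate([np.linspace(-90, 90, n // 2),
                              np.linspace(90, -90, n // 2)])
  rng = np.random.default_rng(1)

  # Average amplitude-1 complex values across subjects whose maps are
  # spatially offset by a few vertices relative to each other.
  n_subjects = 10
  avg = np.zeros(n, dtype=complex)
  for _ in range(n_subjects):
      shift = rng.integers(-6, 7)
      avg += np.exp(1j * np.deg2rad(np.roll(angle_deg, shift))) / n_subjects

  recovered_deg = np.rad2deg(np.angle(avg))
  print("true range:     ", angle_deg.min(), angle_deg.max())          # -90 .. +90
  print("recovered range:", recovered_deg.min(), recovered_deg.max())  # compressed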

Surface-based smoothing will also reduce the range of polar angles, so
smoothing here has been kept to an absolute minimum (a 2D kernel width
well under the functional voxel width).

Another problem is that our direct-view setup, which is needed to get a large
field of view (particularly helpful for suppressing anomalous off-the-edge
effects), requires a somewhat downward gaze. In areas with strong eye
position feedback, this might have the effect of moving maps toward the
lower field, as we observed experimentally in Sereno and Huang (2006)
by having subjects fixate the center of a video in a circular aperture
that moved around the face. This was a roughly constant retinotopic
stimulus, but it generated data that looked like a standard polar angle
retinotopy with central fixation.

The first consideration makes it difficult to directly compare polar
angles in unsmoothed single-subject mapping data with a cross-subject
average.

Some possible ways to correct these biases include searchlight-based polar
angle range normalization, or cortical-area-size-based normalization.
Another option, which I have coded in csurf (but have been too
embarrassed to ever use :-} ), is to drive the intersubject cortical
surface alignment by individual-subject map coordinates.  Obviously,
this approach will 'work' (make the maps look better, with a broader
range of polar angles); but the double-dipping aspects scared me off.
It can also introduce noise if between-subject areal size differences
are big enough that one person's upper-field representation overlaps
another person's upper-field representation from a different cortical area.
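
One possible reading of the searchlight-based range normalization mentioned
above, purely as an illustration (this is not code from csurf, and the
neighborhood structure is left abstract):

  import numpy as np

  def searchlight_range_normalize(avg_phase, single_subj_phase, neighborhoods):
      # Rescale group-average phases (0 = horizontal meridian) so that each
      # searchlight's polar angle range matches the range seen in a reference
      # set of (unsmoothed) single-subject phases.
      out = np.array(avg_phase, dtype=float, copy=True)
      for center, nbrs in neighborhoods.items():    # vertex index -> neighbor indices
          avg_range = np.ptp(avg_phase[nbrs])
          ref_range = np.ptp(single_subj_phase[nbrs])
          if avg_range > 0:
              out[center] = avg_phase[center] * (ref_range / avg_range)
      return np.clip(out, -np.pi / 2, np.pi / 2)    # keep within the right hemifield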

So, all that said, for the current data set, the polar angle range
in posterior areas was linearly expanded (centered on the horizontal
meridian) by 20% to make V1 and V2 approximately 'correct'.  Moving
anteriorly, the polar angle range expansion reached 40% just in front
of MT. This makes it easier to see the map structure in the smaller,
more anterior areas.
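
In terms of the phase values in the label files, the applied expansion
amounts to something like the sketch below (assuming phase 0 is the
horizontal meridian and the data stay within the right hemifield; how the
posterior-to-anterior ramp is parameterized per vertex is schematic here):

  import numpy as np

  def expand_polar_angle(phase_rad, anterior_frac):
      # phase_rad: polar angle in radians, 0 = horizontal meridian.
      # anterior_frac: 0 for the most posterior areas, 1 just in front of MT.
      gain = 1.20 + 0.20 * np.clip(anterior_frac, 0.0, 1.0)   # 20% .. 40% expansion
      # expand about the horizontal meridian, clipped to the right hemifield
      return np.clip(gain * phase_rad, -np.pi / 2, np.pi / 2)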