Compare commits
13 commits: v1.1.0 ... 953ea90c67

| SHA1 |
|---|
| 953ea90c67 |
| 20b255321b |
| b5afcec37d |
| 5361f6ea21 |
| ee023c26c1 |
| 06c9ff0ecf |
| 542dd85a78 |
| 3e0f70ea49 |
| d6c71e0ab2 |
| 87073fb218 |
| 3d0fbd5c5e |
| 3f38f5a978 |
| 0607ced61e |
changelog.md (67 changes)

@@ -1,6 +1,71 @@
+# Next Release
+
+- Fixed Windows saves not being able to be opened by a Mac (hopefully the other way too!)
+- Added the option to right-click loaded snirf files to reveal them in a file browser, or to delete them if they are no longer desired
+- Changed the way folders are opened to store the files separately rather than the folder as a whole, to allow for the removal of individual files
+- Fixed issues with dropdowns and bubbles not populating correctly when opening a single file, and temporarily removed the option to open multiple folders
+- Improved crash handling and the message that is displayed to the user if the application crashes
+- The progress bar now colours the failing stage red if a file fails during processing
+- A warning message is displayed when a file fails to process, with information on what went wrong; it does not halt the processing of the other files
+- Fixed the number of rectangles in the progress bar to 20 (was incorrect in v1.1.1)
+- Added validation to ensure loaded files do not have 2-dimensional data when clicking process, to prevent inaccurate results from being generated
+- Added more metadata information to the top-left information panel
+- Changed the status bar message when processing is complete to state how many files were successful and how many were not
+- Added a clickable link below the selected file's metadata explaining the independent parameters and why they are useful
+- Updated some tooltips to provide better, more accurate information
+- Added details about the processing steps and their order to the user guide
+
+# Version 1.1.4
+
+- Fixed some display text to display the correct information
+- A new option under Analysis has been added to export the data from a specified participant as a csv file. Fixes [Issue 19](https://git.research.dezeeuw.ca/tyler/flares/issues/19), [Issue 27](https://git.research.dezeeuw.ca/tyler/flares/issues/27)
+- Added 2 new parameters, TIME_WINDOW_START and TIME_WINDOW_END. Fixes [Issue 29](https://git.research.dezeeuw.ca/tyler/flares/issues/29)
+  - These parameters affect the visualization of the significance and contrast images but do not change the total time modeled underneath (see the sketch after this changelog)
+- Fixed the duration of annotations edited from a BORIS file from 0 seconds to their proper duration
+- Added the annotation information to each participant under their "File information" window
+- Fixed Macs not being able to save snirfs being updated from BORIS files, and the updated files not respecting the path chosen by the user
+
+# Version 1.1.3
+
+- Added back the ability to use the fOLD dataset. Fixes [Issue 23](https://git.research.dezeeuw.ca/tyler/flares/issues/23)
+  - A 5th option has been added under Analysis to get the fOLD channels per participant
+- Added an option to cancel the running process. Fixes [Issue 15](https://git.research.dezeeuw.ca/tyler/flares/issues/15)
+- Prevented graph images from showing while participants are being processed. Fixes [Issue 24](https://git.research.dezeeuw.ca/tyler/flares/issues/24)
+- Allowed the option to remove all events of a type from all loaded snirfs. Fixes [Issue 25](https://git.research.dezeeuw.ca/tyler/flares/issues/25)
+- Added new icons in the menu bar
+- Added a terminal to interact with the app in a more command-like form
+  - Currently the terminal has no functionality, but some features for batch operations will be coming soon!
+- The Inter-Group viewer now has the option to visualize the average response on the brain across all participants in the group. Fixes [Issue 26](https://git.research.dezeeuw.ca/tyler/flares/issues/26)
+- Fixed the description under "Update events in snirf file..."
+
+# Version 1.1.2
+
+- Fixed incorrect colormaps being applied
+- Added functionality to utilize external event markers from a file. Fixes [Issue 6](https://git.research.dezeeuw.ca/tyler/flares/issues/6)
+
+# Version 1.1.1
+
+- Fixed the number of rectangles in the progress bar to 19
+- Fixed a crash when attempting to load a brain image on Windows
+- Removed hardcoded event annotations. Fixes [Issue 16](https://git.research.dezeeuw.ca/tyler/flares/issues/16)
+
 # Version 1.1.0
 
-- Changelog details coming soon
+- Revamped the Analysis window
+  - 4 options are available: Participant, Participant Brain, Inter-Group, and Cross Group Brain
+  - Customization is present to query different participants, images, events, brains, etc.
+- Removed preprocessing options and reorganized their order to correlate with the actual order
+  - Most of the removed preprocessing options will be coming back soon
+- Added a group option when clicking on a participant's file
+  - If no group is specified, the participant is added to the "Default" group
+- Added an option to update the optode positions in a snirf file from the Options menu (F6)
+- Fixed [Issue 3](https://git.research.dezeeuw.ca/tyler/flares/issues/3), [Issue 4](https://git.research.dezeeuw.ca/tyler/flares/issues/4), [Issue 17](https://git.research.dezeeuw.ca/tyler/flares/issues/17), [Issue 21](https://git.research.dezeeuw.ca/tyler/flares/issues/21), [Issue 22](https://git.research.dezeeuw.ca/tyler/flares/issues/22)
 
 # Version 1.0.1
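The TIME_WINDOW_START / TIME_WINDOW_END parameters listed under v1.1.4 are implemented further down in this diff by filtering FIR delay columns. A tiny standalone sketch of the selection logic, with assumed example values (the real column names come from the FIR design matrix):

```python
# Hypothetical values; the real ones come from the loaded parameter file.
TIME_WINDOW_START, TIME_WINDOW_END = 3, 8

# FIR delay columns are named like "Activity_delay_0" ... "Activity_delay_14".
all_delay_cols = [f"Activity_delay_{i}" for i in range(15)]

# Only the delays inside the window are visualized; the model itself is unchanged.
windowed = [
    col for col in all_delay_cols
    if TIME_WINDOW_START <= int(col.split("_delay_")[-1]) <= TIME_WINDOW_END
]
print(windowed)  # ['Activity_delay_3', 'Activity_delay_4', ..., 'Activity_delay_8']
```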
flares.py (590 changes)

@@ -48,9 +48,14 @@ from statsmodels.stats.multitest import multipletests
 from scipy import stats
 from scipy.spatial.distance import cdist
 
+# Backend visualization needed to be defined for pyinstaller
+import pyvistaqt  # type: ignore
+# import vtkmodules.util.data_model
+# import vtkmodules.util.execution_model
+
 # External library imports for mne
 from mne import (
-    EvokedArray, SourceEstimate, Info, Epochs, Label,
+    EvokedArray, SourceEstimate, Info, Epochs, Label, Annotations,
     events_from_annotations, read_source_spaces,
     stc_near_sensors, pick_types, grand_average, get_config, set_config, read_labels_from_annot
 )  # type: ignore
@@ -125,6 +130,13 @@ TDDR: bool
 
 ENHANCE_NEGATIVE_CORRELATION: bool
 
+SHORT_CHANNEL: bool
+
+REMOVE_EVENTS: list
+
+TIME_WINDOW_START: int
+TIME_WINDOW_END: int
+
 VERBOSITY = True
 
 # FIXME: Shouldn't need each ordering - just order it before checking
@@ -171,6 +183,10 @@ REQUIRED_KEYS: dict[str, Any] = {
     "PSP_TIME_WINDOW": int,
     "PSP_THRESHOLD": float,
 
+    "SHORT_CHANNEL": bool,
+    "REMOVE_EVENTS": list,
+    "TIME_WINDOW_START": int,
+    "TIME_WINDOW_END": int
     # "REJECT_PAIRS": bool,
     # "FORCE_DROP_ANNOTATIONS": list,
     # "FILTER_LOW_PASS": float,
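REQUIRED_KEYS maps each new parameter name to the type it must carry. As a sketch of how a schema like this can be enforced before processing starts (hypothetical validate_config helper, not code from flares.py):

```python
from typing import Any

REQUIRED_KEYS: dict[str, Any] = {
    "SHORT_CHANNEL": bool,
    "REMOVE_EVENTS": list,
    "TIME_WINDOW_START": int,
    "TIME_WINDOW_END": int,
}

def validate_config(config: dict[str, Any]) -> list[str]:
    """Return a list of problems; an empty list means the config is valid."""
    problems = []
    for key, expected_type in REQUIRED_KEYS.items():
        if key not in config:
            problems.append(f"missing key: {key}")
        elif not isinstance(config[key], expected_type):
            problems.append(f"{key} should be {expected_type.__name__}, "
                            f"got {type(config[key]).__name__}")
    return problems

print(validate_config({"SHORT_CHANNEL": True, "REMOVE_EVENTS": [], "TIME_WINDOW_START": 0}))
# -> ["missing key: TIME_WINDOW_END"]
```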
@@ -252,40 +268,42 @@ def set_metadata(file_path, metadata: dict[str, Any]) -> None:
         val = file_metadata.get(key, None)
         if val not in (None, '', [], {}, ()):  # check for "empty" values
             globals()[key] = val
 
+from queue import Empty  # This works with multiprocessing.Manager().Queue()
+
 
 def gui_entry(config: dict[str, Any], gui_queue: Queue, progress_queue: Queue) -> None:
-    try:
-        # Start a thread to forward progress messages back to GUI
-        def forward_progress():
-            while True:
-                try:
-                    msg = progress_queue.get(timeout=1)
-                    if msg == "__done__":
-                        break
-                    gui_queue.put(msg)
-                except:
-                    continue
+    def forward_progress():
+        while True:
+            try:
+                msg = progress_queue.get(timeout=1)
+                if msg == "__done__":
+                    break
+                gui_queue.put(msg)
+            except Empty:
+                continue
+            except Exception as e:
+                gui_queue.put({
+                    "type": "error",
+                    "error": f"Forwarding thread crashed: {e}",
+                    "traceback": traceback.format_exc()
+                })
+                break
 
-        t = threading.Thread(target=forward_progress, daemon=True)
-        t.start()
+    t = threading.Thread(target=forward_progress, daemon=True)
+    t.start()
 
+    try:
         file_paths = config['SNIRF_FILES']
         file_params = config['PARAMS']
         file_metadata = config['METADATA']
 
         max_workers = file_params.get("MAX_WORKERS", int(os.cpu_count()/4))
 
-        print("actual call")
-        results = process_multiple_participants(file_paths, file_params, file_metadata, progress_queue, max_workers)
-
-        # Signal end of progress
-        progress_queue.put("__done__")
-        t.join()
+        # Run the actual processing, with progress_queue passed down
+        results = process_multiple_participants(
+            file_paths, file_params, file_metadata, progress_queue, max_workers
+        )
 
         gui_queue.put({"success": True, "result": results})
 
     except Exception as e:
         gui_queue.put({
             "success": False,
@@ -293,6 +311,14 @@ def gui_entry(config: dict[str, Any], gui_queue: Queue, progress_queue: Queue) -> None:
             "traceback": traceback.format_exc()
         })
 
+    finally:
+        # Always send done to the thread and avoid hanging
+        try:
+            progress_queue.put("__done__")
+        except:
+            pass
+        t.join(timeout=5)  # prevent permanent hang
+
 
 
 def process_participant_worker(args):
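Two details of the restructured gui_entry are worth spelling out. Queue.get(timeout=1) reports an empty queue by raising queue.Empty, and the proxy queues returned by multiprocessing.Manager().Queue() raise that same class, which is what the comment on the new import is pointing out. The old bare except: also swallowed real errors, which the second handler now forwards to the GUI instead. A minimal, runnable illustration of the timeout behaviour:

```python
from multiprocessing import Manager
from queue import Empty  # raised by queue.Queue and by Manager().Queue() proxies

if __name__ == "__main__":
    with Manager() as manager:
        q = manager.Queue()
        try:
            q.get(timeout=0.1)  # nothing was put, so the call times out
        except Empty:
            print("queue.Empty raised after the timeout, as expected")
```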
@@ -327,9 +353,16 @@ def process_multiple_participants(file_paths, file_params, file_metadata, progress_queue, max_workers):
             try:
                 file_path, result, error = future.result()
                 if error:
-                    print(f"Error processing {file_path}: {error[0]}")
-                    print(error[1])
+                    error_message, error_traceback = error
+                    if progress_queue:
+                        progress_queue.put({
+                            "type": "error",
+                            "file": file_path,
+                            "error": error_message,
+                            "traceback": error_traceback
+                        })
                     continue
 
                 results_by_file[file_path] = result
             except Exception as e:
                 print(f"Unexpected error processing {file_path}: {e}")
@@ -1037,7 +1070,8 @@ def filter_the_data(raw_haemo):
         average=True, xscale="log", color="r", show=False, amplitude=False
     )
 
-    raw_haemo = raw_haemo.filter(l_freq=None, h_freq=0.4, h_trans_bandwidth=0.2)
+    #raw_haemo = raw_haemo.filter(l_freq=None, h_freq=0.4, h_trans_bandwidth=0.2)
+    raw_haemo = raw_haemo.filter(0.05, 0.7, h_trans_bandwidth=0.2, l_trans_bandwidth=0.02)
 
     raw_haemo.compute_psd(fmax=2).plot(
         average=True, xscale="log", axes=fig_filter.axes, color="g", amplitude=False, show=False
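The filter change above replaces a pure low-pass (keep everything below 0.4 Hz) with a 0.05-0.7 Hz band-pass, so slow drifts are now removed as well; the positional arguments to Raw.filter are l_freq and h_freq. A self-contained sketch of the same call on synthetic data (sampling rate and channel names are assumed for illustration):

```python
import numpy as np
import mne

sfreq = 10.0  # Hz, assumed
info = mne.create_info(ch_names=["ch1", "ch2"], sfreq=sfreq, ch_types="eeg")
raw = mne.io.RawArray(np.random.randn(2, int(sfreq * 600)), info)  # 10 min of noise

# Same band and transition widths as the new call in the diff above.
raw = raw.filter(0.05, 0.7, l_trans_bandwidth=0.02, h_trans_bandwidth=0.2)
```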
@@ -1066,7 +1100,7 @@ def epochs_calculations(raw_haemo, events, event_dict):
 
     # Plot drop log
     # TODO: Why show this if we never use epochs2?
-    fig_epochs_dropped = epochs2.plot_drop_log()
+    fig_epochs_dropped = epochs2.plot_drop_log(show=False)
     fig_epochs.append(("fig_epochs_dropped", fig_epochs_dropped))
 
     # Plot for each condition
@@ -1100,7 +1134,7 @@ def epochs_calculations(raw_haemo, events, event_dict):
     evokeds3 = []
     colors = []
     conditions = list(epochs.event_id.keys())
-    cmap = plt.cm.get_cmap("tab10", len(conditions))
+    cmap = plt.get_cmap("tab10", len(conditions))
 
     for idx, cond in enumerate(conditions):
         evoked = epochs[cond].average(picks="hbo")
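The plt.cm.get_cmap to plt.get_cmap swaps here and in fold_channels further down track a Matplotlib API change: matplotlib.cm.get_cmap was deprecated in 3.7 and removed in 3.9, while pyplot.get_cmap and the matplotlib.colormaps registry remain available in recent releases. The equivalent spellings:

```python
import matplotlib
import matplotlib.pyplot as plt

cmap_a = plt.get_cmap("tab10", 5)                    # pyplot helper, optional lut size
cmap_b = matplotlib.colormaps["tab10"].resampled(5)  # registry form

print(cmap_a(0) == cmap_b(0))  # True: same first color
```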
@@ -1120,16 +1154,20 @@ def epochs_calculations(raw_haemo, events, event_dict):
     fig.legend(lines, conditions, loc="lower right")
     fig_epochs.append(("evoked_topo", help))  # Store with a unique name
 
-    # Evoked response for specific condition ("Reach")
-    evoked_stim1 = epochs['Reach'].average()
-
-    fig_evoked_hbo = evoked_stim1.copy().pick(picks='hbo').plot(time_unit='s', show=False)
-    fig_evoked_hbr = evoked_stim1.copy().pick(picks='hbr').plot(time_unit='s', show=False)
-
-    fig_epochs.append(("fig_evoked_hbo", fig_evoked_hbo))  # Store with a unique name
-    fig_epochs.append(("fig_evoked_hbr", fig_evoked_hbr))  # Store with a unique name
-
-    print("Evoked HbO peak amplitude:", evoked_stim1.copy().pick(picks='hbo').data.max())
+    unique_annotations = set(raw_haemo.annotations.description)
+
+    for cond in unique_annotations:
+
+        # Evoked response for specific condition ("Activity")
+        evoked_stim1 = epochs[cond].average()
+
+        fig_evoked_hbo = evoked_stim1.copy().pick(picks='hbo').plot(time_unit='s', show=False)
+        fig_evoked_hbr = evoked_stim1.copy().pick(picks='hbr').plot(time_unit='s', show=False)
+
+        fig_epochs.append((f"fig_evoked_hbo_{cond}", fig_evoked_hbo))  # Store with a unique name
+        fig_epochs.append((f"fig_evoked_hbr_{cond}", fig_evoked_hbr))  # Store with a unique name
+
+        print("Evoked HbO peak amplitude:", evoked_stim1.copy().pick(picks='hbo').data.max())
 
     evokeds = {}
     for condition in epochs2.event_id:
@@ -1200,26 +1238,36 @@
 
 
 def make_design_matrix(raw_haemo, short_chans):
 
     raw_haemo.resample(1, npad="auto")
-    short_chans.resample(1)
     raw_haemo._data = raw_haemo._data * 1e6
     # 2) Create design matrix
-    design_matrix = make_first_level_design_matrix(
-        raw=raw_haemo,
-        hrf_model='fir',
-        stim_dur=0.5,
-        fir_delays=range(15),
-        drift_model='cosine',
-        high_pass=0.01,
-        oversampling=1,
-        min_onset=-125,
-        add_regs=short_chans.get_data().T,
-        add_reg_names=short_chans.ch_names
-    )
+    if SHORT_CHANNEL:
+        short_chans.resample(1)
+        design_matrix = make_first_level_design_matrix(
+            raw=raw_haemo,
+            hrf_model='fir',
+            stim_dur=0.5,
+            fir_delays=range(15),
+            drift_model='cosine',
+            high_pass=0.01,
+            oversampling=1,
+            min_onset=-125,
+            add_regs=short_chans.get_data().T,
+            add_reg_names=short_chans.ch_names
+        )
+    else:
+        design_matrix = make_first_level_design_matrix(
+            raw=raw_haemo,
+            hrf_model='fir',
+            stim_dur=0.5,
+            fir_delays=range(15),
+            drift_model='cosine',
+            high_pass=0.01,
+            oversampling=1,
+            min_onset=-125,
+        )
 
     print(design_matrix.head())
     print(design_matrix.columns)
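The two make_first_level_design_matrix calls in the new branch differ only in the short-channel regressors. A possible tightening, offered as a sketch under the same names rather than as flares.py code, builds the shared keyword arguments once and adds the regressors conditionally:

```python
def make_design_matrix(raw_haemo, short_chans):
    # Shared arguments for both branches (same values as in the diff above).
    kwargs = dict(
        raw=raw_haemo,
        hrf_model='fir',
        stim_dur=0.5,
        fir_delays=range(15),
        drift_model='cosine',
        high_pass=0.01,
        oversampling=1,
        min_onset=-125,
    )
    if SHORT_CHANNEL:
        short_chans.resample(1)
        kwargs["add_regs"] = short_chans.get_data().T
        kwargs["add_reg_names"] = short_chans.ch_names
    return make_first_level_design_matrix(**kwargs)
```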
@@ -1232,10 +1280,6 @@ def make_design_matrix(raw_haemo, short_chans):
 
 
-
-
-
-
 def generate_montage_locations():
     """Get standard MNI montage locations in dataframe.
 
@@ -1452,9 +1496,15 @@ def resource_path(relative_path):
 
 def fold_channels(raw: BaseRaw) -> None:
 
-    # Locate the fOLD excel files
-    set_config('MNE_NIRS_FOLD_PATH', resource_path("../../mne_data/fOLD/fOLD-public-master/Supplementary"))  # type: ignore
+    # if getattr(sys, 'frozen', False):
+    path = os.path.expanduser("~") + "/mne_data/fOLD/fOLD-public-master/Supplementary"
+    logger.info(path)
+    set_config('MNE_NIRS_FOLD_PATH', resource_path(path))  # type: ignore
+
+    # # Locate the fOLD excel files
+    # else:
+    #     logger.info("yabba")
+    #     set_config('MNE_NIRS_FOLD_PATH', resource_path("../../mne_data/fOLD/fOLD-public-master/Supplementary"))  # type: ignore
 
     output = None
 
@@ -1516,8 +1566,8 @@ def fold_channels(raw: BaseRaw) -> None:
         "Brain_Outside",
     ]
 
-    cmap1 = plt.cm.get_cmap('tab20')  # First 20 colors
-    cmap2 = plt.cm.get_cmap('tab20b')  # Next 20 colors
+    cmap1 = plt.get_cmap('tab20')  # First 20 colors
+    cmap2 = plt.get_cmap('tab20b')  # Next 20 colors
 
     # Combine the colors from both colormaps
     colors = [cmap1(i) for i in range(20)] + [cmap2(i) for i in range(20)]  # Total 40 colors
@@ -1593,6 +1643,7 @@ def fold_channels(raw: BaseRaw) -> None:
     for ax in axes[len(hbo_channel_names):]:
         ax.axis('off')
 
+    plt.show()
     return fig, legend_fig
 
 
@@ -1600,153 +1651,158 @@
 
 def individual_significance(raw_haemo, glm_est):
 
+    fig_individual_significances = []  # List to store figures
+
     # TODO: BAD!
     cha = glm_est.to_dataframe()
 
-    ch_summary = cha.query("Condition.str.startswith('Reach_delay_') and Chroma == 'hbo'", engine='python')
-
-    print(ch_summary.head())
-
-    channel_averages = ch_summary.groupby('ch_name')['theta'].mean().reset_index()
-    print(channel_averages.head())
-
-    reach_ch_summary = ch_summary.query(
-        "Chroma == 'hbo' and Condition.str.startswith('Reach_delay_')", engine='python'
-    )
+    unique_annotations = set(raw_haemo.annotations.description)
+
+    for cond in unique_annotations:
+
+        ch_summary = cha.query(f"Condition.str.startswith('{cond}_delay_') and Chroma == 'hbo'", engine='python')
+
+        print(ch_summary.head())
+
+        channel_averages = ch_summary.groupby('ch_name')['theta'].mean().reset_index()
+        print(channel_averages.head())
+
+        activity_ch_summary = ch_summary.query(
+            f"Chroma == 'hbo' and Condition.str.startswith('{cond}_delay_')", engine='python'
+        )
 
         # Function to correct p-values per channel
         def fdr_correct_per_channel(df):
            df = df.copy()
            df['pval_fdr'] = multipletests(df['p_value'], method='fdr_bh')[1]
            return df
 
        # Apply FDR correction grouped by channel
-       corrected = reach_ch_summary.groupby("ch_name", group_keys=False).apply(fdr_correct_per_channel)
+       corrected = activity_ch_summary.groupby("ch_name", group_keys=False).apply(fdr_correct_per_channel)
 
        # Determine which channels are significant across any delay
        sig_channels = (
            corrected.groupby('ch_name')
            .apply(lambda df: (df['pval_fdr'] < 0.05).any())
            .reset_index(name='significant')
        )
 
        # Merge with mean theta (optional for plotting)
-       mean_theta = reach_ch_summary.groupby('ch_name')['theta'].mean().reset_index()
+       mean_theta = activity_ch_summary.groupby('ch_name')['theta'].mean().reset_index()
        sig_channels = sig_channels.merge(mean_theta, on='ch_name')
        print(sig_channels)
 
        # For example, take the minimum corrected p-value per channel
        summary_pvals = corrected.groupby('ch_name')['pval_fdr'].min().reset_index()
        print(summary_pvals)
 
        def parse_ch_name(ch_name):
            # Extract numbers after S and D in names like 'S10_D5 hbo'
            match = re.match(r'S(\d+)_D(\d+)', ch_name)
            if match:
                return int(match.group(1)), int(match.group(2))
            else:
                return None, None
 
        min_pvals = corrected.groupby('ch_name')['pval_fdr'].min().reset_index()
 
        # Merge the real p-values into sig_channels / avg_df
        avg_df = sig_channels.merge(min_pvals, on='ch_name')
 
        # Rename columns for consistency
        avg_df = avg_df.rename(columns={'theta': 't_or_theta', 'pval_fdr': 'p_value'})
 
        # Add Source and Detector columns again
        avg_df['Source'], avg_df['Detector'] = zip(*avg_df['ch_name'].map(parse_ch_name))
 
        # Keep relevant columns
        avg_df = avg_df[['Source', 'Detector', 't_or_theta', 'p_value']].dropna()
 
        ABS_SIGNIFICANCE_THETA_VALUE = 1
        ABS_SIGNIFICANCE_T_VALUE = 1
        P_THRESHOLD = 0.05
        SOURCE_DETECTOR_SEPARATOR = "_"
-       Reach = "Reach"
 
        t_or_theta = 'theta'
        for _, row in avg_df.iterrows():  # type: ignore
            print(f"Source {row['Source']} <-> Detector {row['Detector']}: "
                  f"Avg {t_or_theta}-value = {row['t_or_theta']:.3f}, Avg p-value = {row['p_value']:.3f}")
 
        # Extract the source and detector positions from raw
        src_pos: dict[int, tuple[float, float]] = {}
        det_pos: dict[int, tuple[float, float]] = {}
        for ch in getattr(raw_haemo, "info")["chs"]:
            ch_name = ch['ch_name']
            if not ch_name or not ch['loc'].any():
                continue
            parts = ch_name.split()[0]
            src_str, det_str = parts.split(SOURCE_DETECTOR_SEPARATOR)
            src_num = int(src_str[1:])
            det_num = int(det_str[1:])
            src_pos[src_num] = ch['loc'][3:5]
            det_pos[det_num] = ch['loc'][6:8]
 
        # Set up the plot
        fig, ax = plt.subplots(figsize=(8, 6))  # type: ignore
 
        # Plot the sources
        for pos in src_pos.values():
            ax.scatter(pos[0], pos[1], s=120, c='k', marker='o', edgecolors='white', linewidths=1, zorder=3)  # type: ignore
 
        # Plot the detectors
        for pos in det_pos.values():
            ax.scatter(pos[0], pos[1], s=120, c='k', marker='s', edgecolors='white', linewidths=1, zorder=3)  # type: ignore
 
        # Ensure that the colors stay within the boundaries even if they are over or under the max/min values
        if t_or_theta == 't':
            norm = mcolors.Normalize(vmin=-ABS_SIGNIFICANCE_T_VALUE, vmax=ABS_SIGNIFICANCE_T_VALUE)
        elif t_or_theta == 'theta':
            norm = mcolors.Normalize(vmin=-ABS_SIGNIFICANCE_THETA_VALUE, vmax=ABS_SIGNIFICANCE_THETA_VALUE)
 
        cmap: mcolors.Colormap = plt.get_cmap('seismic')
 
        # Plot connections with avg t-values
        for row in avg_df.itertuples():
            src: int = cast(int, row.Source)  # type: ignore
            det: int = cast(int, row.Detector)  # type: ignore
            tval: float = cast(float, row.t_or_theta)  # type: ignore
            pval: float = cast(float, row.p_value)  # type: ignore
 
            if src in src_pos and det in det_pos:
                x = [src_pos[src][0], det_pos[det][0]]
                y = [src_pos[src][1], det_pos[det][1]]
                style = '-' if pval <= P_THRESHOLD else '--'
                ax.plot(x, y, linestyle=style, color=cmap(norm(tval)), linewidth=4, alpha=0.9, zorder=2)  # type: ignore
 
        # Format the Colorbar
        sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
        sm.set_array([])
        cbar = plt.colorbar(sm, ax=ax, shrink=0.85)  # type: ignore
-       cbar.set_label(f'Average {Reach} {t_or_theta} value (hbo)', fontsize=11)  # type: ignore
+       cbar.set_label(f'Average {cond} {t_or_theta} value (hbo)', fontsize=11)  # type: ignore
 
        # Formatting the subplots
        ax.set_aspect('equal')
-       ax.set_title(f"Average {t_or_theta} values for {Reach} (HbO)", fontsize=14)  # type: ignore
+       ax.set_title(f"Average {t_or_theta} values for {cond} (HbO)", fontsize=14)  # type: ignore
        ax.set_xlabel('X position (m)', fontsize=11)  # type: ignore
        ax.set_ylabel('Y position (m)', fontsize=11)  # type: ignore
        ax.grid(True, alpha=0.3)  # type: ignore
 
        # Set axis limits to be 1cm more than the optode positions
        all_x = [pos[0] for pos in src_pos.values()] + [pos[0] for pos in det_pos.values()]
        all_y = [pos[1] for pos in src_pos.values()] + [pos[1] for pos in det_pos.values()]
        ax.set_xlim(min(all_x)-0.01, max(all_x)+0.01)
        ax.set_ylim(min(all_y)-0.01, max(all_y)+0.01)
 
        fig.tight_layout()
 
-    return fig
+        fig_individual_significances.append((f"Condition {cond}", fig))
+
+    return fig_individual_significances
 
 
 # TODO: Hardcoded
 def group_significance(
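For readers unfamiliar with the FDR step inside individual_significance: multipletests returns a tuple (reject, pvals_corrected, alphacSidak, alphacBonf), so index [1] is the Benjamini-Hochberg-adjusted p-values. A standalone sketch of the per-channel correction with made-up p-values:

```python
import pandas as pd
from statsmodels.stats.multitest import multipletests

df = pd.DataFrame({
    "ch_name": ["S1_D1 hbo"] * 3 + ["S2_D1 hbo"] * 3,
    "p_value": [0.001, 0.04, 0.20, 0.03, 0.60, 0.90],  # made-up values
})

def fdr_correct_per_channel(group):
    group = group.copy()
    # [1] selects the corrected p-values from the returned tuple.
    group["pval_fdr"] = multipletests(group["p_value"], method="fdr_bh")[1]
    return group

corrected = df.groupby("ch_name", group_keys=False).apply(fdr_correct_per_channel)
print(corrected)
```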
@@ -1761,7 +1817,7 @@ def group_significance(
     Args:
         raw_haemo: Raw haemoglobin MNE object (used for optode positions)
         all_cha: DataFrame with columns including 'ID', 'Condition', 'p_value', 'theta', 'df', 'ch_name', 'Chroma'
-        condition: condition prefix, e.g., 'Reach'
+        condition: condition prefix, e.g., 'Activity'
         correction: p-value correction method ('fdr_bh' or 'bonferroni')
 
     Returns:
@@ -1919,7 +1975,12 @@ def group_significance(
 
 def plot_glm_results(file_path, raw_haemo, glm_est, design_matrix):
 
+    fig_glms = []  # List to store figures
+
     dm = design_matrix.copy()
+    logger.info(design_matrix.shape)
+    logger.info(design_matrix.columns)
+    logger.info(design_matrix.head())
 
     rois = dict(AllChannels=range(len(raw_haemo.ch_names)))
     conditions = design_matrix.columns
@@ -1928,72 +1989,83 @@ def plot_glm_results(file_path, raw_haemo, glm_est, design_matrix):
     df_individual["ID"] = file_path
     # df_individual["theta"] = [t * 1.0e6 for t in df_individual["theta"]]
 
-    condition_of_interest="Reach"
-
-    # Filter for the condition of interest and FIR delays
-    df_individual["isCondition"] = [condition_of_interest in n for n in df_individual["Condition"]]
-    df_individual["isDelay"] = ["delay" in n for n in df_individual["Condition"]]
-    df_individual = df_individual.query("isDelay and isCondition")
-
-    # Remove other conditions from design matrix
-    dm_condition_cols = [col for col in dm.columns if condition_of_interest in col]
-    dm_cond = dm[dm_condition_cols]
-
-    # Add a numeric delay column
-    def extract_delay_number(condition_str):
-        # Extracts the number at the end of a string like 'Reach_delay_5'
-        return int(condition_str.split("_")[-1])
-
-    df_individual["DelayNum"] = df_individual["Condition"].apply(extract_delay_number)
-
-    # Now separate and sort using numeric delay
-    df_hbo = df_individual[df_individual["Chroma"] == "hbo"].sort_values("DelayNum")
-    df_hbr = df_individual[df_individual["Chroma"] == "hbr"].sort_values("DelayNum")
+    first_onset_for_cond = {}
+    for onset, desc in zip(raw_haemo.annotations.onset, raw_haemo.annotations.description):
+        if desc not in first_onset_for_cond:
+            first_onset_for_cond[desc] = onset
+
+    # Get unique condition names from annotations (descriptions)
+    unique_annotations = set(raw_haemo.annotations.description)
+
+    for cond in unique_annotations:
+        logger.info(cond)
+        df_individual_filtered = df_individual.copy()
+
+        # Filter for the condition of interest and FIR delays
+        df_individual_filtered["isCondition"] = [cond in n for n in df_individual_filtered["Condition"]]
+        df_individual_filtered["isDelay"] = ["delay" in n for n in df_individual_filtered["Condition"]]
+        df_individual_filtered = df_individual_filtered.query("isDelay and isCondition")
+
+        # Remove other conditions from design matrix
+        dm_condition_cols = [col for col in dm.columns if cond in col]
+        dm_cond = dm[dm_condition_cols]
+
+        # Add a numeric delay column
+        def extract_delay_number(condition_str):
+            # Extracts the number at the end of a string like 'Activity_delay_5'
+            return int(condition_str.split("_")[-1])
+
+        df_individual_filtered["DelayNum"] = df_individual_filtered["Condition"].apply(extract_delay_number)
+
+        # Now separate and sort using numeric delay
+        df_hbo = df_individual_filtered[df_individual_filtered["Chroma"] == "hbo"].sort_values("DelayNum")
+        df_hbr = df_individual_filtered[df_individual_filtered["Chroma"] == "hbr"].sort_values("DelayNum")
 
        vals_hbo = df_hbo["theta"].values
        vals_hbr = df_hbr["theta"].values
 
        # Create the plot
        fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(19, 10))
 
        # Scale design matrix components using numpy arrays instead of pandas operations
        dm_cond_values = dm_cond.values
        dm_cond_scaled_hbo = dm_cond_values * vals_hbo.reshape(1, -1)
        dm_cond_scaled_hbr = dm_cond_values * vals_hbr.reshape(1, -1)
 
        # Create time axis relative to stimulus onset
-       time = dm_cond.index - np.ceil(raw_haemo.annotations.onset[1])
+       time = dm_cond.index - np.ceil(first_onset_for_cond.get(cond, 0))
 
        # Plot
        axes[0].plot(time, dm_cond_values)
        axes[1].plot(time, dm_cond_scaled_hbo)
        axes[2].plot(time, np.sum(dm_cond_scaled_hbo, axis=1), 'r')
        axes[2].plot(time, np.sum(dm_cond_scaled_hbr, axis=1), 'b')
 
        # Format plots
        for ax in range(3):
            axes[ax].set_xlim(-5, 25)
            axes[ax].set_xlabel("Time (s)")
        axes[0].set_ylim(-0.2, 1.2)
        axes[1].set_ylim(-0.5, 1)
        axes[2].set_ylim(-0.5, 1)
        axes[0].set_title(f"FIR Model (Unscaled)")
-       axes[1].set_title(f"FIR Components (Scaled by {condition_of_interest} GLM Estimates)")
-       axes[2].set_title(f"Evoked Response ({condition_of_interest})")
+       axes[1].set_title(f"FIR Components (Scaled by {cond} GLM Estimates)")
+       axes[2].set_title(f"Evoked Response ({cond})")
        axes[0].set_ylabel("FIR Model")
        axes[1].set_ylabel("Oxyhaemoglobin (ΔμMol)")
        axes[2].set_ylabel("Haemoglobin (ΔμMol)")
        axes[2].legend(["Oxyhaemoglobin", "Deoxyhaemoglobin"])
 
        print(f"Number of FIR bins: {len(vals_hbo)}")
        print(f"Mean theta (HbO): {np.mean(vals_hbo):.4f}")
        print(f"Sum of theta (HbO): {np.sum(vals_hbo):.4f}")
        print(f"Mean theta (HbR): {np.mean(vals_hbr):.4f}")
        print(f"Sum of theta (HbR): {np.sum(vals_hbr):.4f}")
 
-    return fig
+        fig_glms.append((f"Condition {cond}", fig))
+
+    return fig_glms
 
 
 def plot_3d_evoked_array(
@@ -2756,7 +2828,7 @@ def calculate_dpf(file_path):
     # order is hbo / hbr
     with h5py.File(file_path, 'r') as f:
         wavelengths = f['/nirs/probe/wavelengths'][:]
-        logger.info("Wavelengths (nm):", wavelengths)
+        logger.info(f"Wavelengths (nm): {wavelengths}")
         wavelengths = sorted(wavelengths, reverse=True)
     age = float(AGE)
     logger.info(f"Their age was {AGE}")
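The logger.info fix just above is more than cosmetic: with the standard logging module, extra positional arguments are %-style formatting arguments for the message string, so logger.info("Wavelengths (nm):", wavelengths) never prints the value; the handler reports a "--- Logging error ---" ("not all arguments converted during string formatting") instead. Either a placeholder or an f-string (as the diff chooses) works:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

wavelengths = [760, 850]

logger.info("Wavelengths (nm): %s", wavelengths)  # lazy %-style formatting
logger.info(f"Wavelengths (nm): {wavelengths}")   # eager f-string, as in the fix
# logger.info("Wavelengths (nm):", wavelengths)   # wrong: triggers a logging error
```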
@@ -2871,9 +2943,12 @@ def process_participant(file_path, progress_callback=None):
     logger.info("11")
 
     # Step 11: Get short / long channels
-    short_chans = get_short_channels(raw_haemo, max_dist=0.015)
-    fig_short_chans = short_chans.plot(duration=raw_haemo.times[-1], n_channels=raw_haemo.info['nchan'], title="Short Channels Only", show=False)
-    fig_individual["short"] = fig_short_chans
+    if SHORT_CHANNEL:
+        short_chans = get_short_channels(raw_haemo, max_dist=0.015)
+        fig_short_chans = short_chans.plot(duration=raw_haemo.times[-1], n_channels=raw_haemo.info['nchan'], title="Short Channels Only", show=False)
+        fig_individual["short"] = fig_short_chans
+    else:
+        short_chans = None
     raw_haemo = get_long_channels(raw_haemo)
     if progress_callback: progress_callback(12)
     logger.info("12")
@@ -2893,6 +2968,19 @@ def process_participant(file_path, progress_callback=None):
     logger.info("14")
 
     # Step 14: Design Matrix
+    events_to_remove = REMOVE_EVENTS
+
+    filtered_annotations = [ann for ann in raw.annotations if ann['description'] not in events_to_remove]
+
+    new_annot = Annotations(
+        onset=[ann['onset'] for ann in filtered_annotations],
+        duration=[ann['duration'] for ann in filtered_annotations],
+        description=[ann['description'] for ann in filtered_annotations]
+    )
+
+    # Set the new annotations
+    raw_haemo.set_annotations(new_annot)
+
     design_matrix, fig_design_matrix = make_design_matrix(raw_haemo, short_chans)
     fig_individual["Design Matrix"] = fig_design_matrix
     if progress_callback: progress_callback(15)
@@ -2916,13 +3004,15 @@ def process_participant(file_path, progress_callback=None):
 
     # Step 16: Plot GLM results
     fig_glm_result = plot_glm_results(file_path, raw_haemo, glm_est, design_matrix)
-    fig_individual["GLM"] = fig_glm_result
+    for name, fig in fig_glm_result:
+        fig_individual[f"GLM {name}"] = fig
     if progress_callback: progress_callback(17)
     logger.info("17")
 
     # Step 17: Plot channel significance
     fig_significance = individual_significance(raw_haemo, glm_est)
-    fig_individual["Significance"] = fig_significance
+    for name, fig in fig_significance:
+        fig_individual[f"Significance {name}"] = fig
     if progress_callback: progress_callback(18)
     logger.info("18")
 
@@ -2964,7 +3054,11 @@ def process_participant(file_path, progress_callback=None):
     contrast_dict = {}
 
     for condition in all_conditions:
-        delay_cols = [col for col in all_delay_cols if col.startswith(f"{condition}_delay_")]
+        delay_cols = [
+            col for col in all_delay_cols
+            if col.startswith(f"{condition}_delay_") and
+            TIME_WINDOW_START <= int(col.split("_delay_")[-1]) <= TIME_WINDOW_END
+        ]
 
         if not delay_cols:
             continue  # skip if no columns found (shouldn't happen?)
@@ -2975,6 +3069,9 @@ def process_participant(file_path, progress_callback=None):
 
         contrast_dict[condition] = contrast_vector
 
+    if progress_callback: progress_callback(19)
+    logger.info("19")
+
     # Compute contrast results
     contrast_results = {}
 
@@ -2988,7 +3085,18 @@ def process_participant(file_path, progress_callback=None):
 
     fig_bytes = convert_fig_dict_to_png_bytes(fig_individual)
 
+    if progress_callback: progress_callback(20)
+    logger.info("20")
+
+    sanitize_paths_for_pickle(raw_haemo, epochs)
+
     return raw_haemo, epochs, fig_bytes, cha, contrast_results, df_ind, design_matrix, AGE, GENDER, GROUP, True
 
-# Not 3000 lines yay!
+def sanitize_paths_for_pickle(raw_haemo, epochs):
+    # Fix raw_haemo._filenames
+    if hasattr(raw_haemo, '_filenames'):
+        raw_haemo._filenames = [str(p) for p in raw_haemo._filenames]
+
+    # Fix epochs._raw._filenames
+    if hasattr(epochs, '_raw') and hasattr(epochs._raw, '_filenames'):
+        epochs._raw._filenames = [str(p) for p in epochs._raw._filenames]
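The new sanitize_paths_for_pickle ties back to the changelog entry about Windows saves failing to open on a Mac: pickles containing concrete pathlib objects are platform-bound, because on Python 3.12 and earlier a WindowsPath cannot be instantiated on POSIX during unpickling. Converting to plain strings, as _filenames now does, keeps saves portable. An illustrative sketch (not flares.py code):

```python
import pickle
from pathlib import Path

save = {"source": Path("results.snirf").resolve()}

# If this pickle is written on Windows, save["source"] is a WindowsPath, and
# loading it on macOS/Linux with Python <= 3.12 fails with
# NotImplementedError: cannot instantiate 'WindowsPath' on your system.
platform_bound = pickle.dumps(save)

# Plain strings round-trip everywhere.
save["source"] = str(save["source"])
print(pickle.loads(pickle.dumps(save)))
```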
icons/folder_eye_24dp_1F1F1F.svg (new file, 593 B)

@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#1f1f1f"><path d="M160-160q-33 0-56.5-23.5T80-240v-480q0-33 23.5-56.5T160-800h240l80 80h320q33 0 56.5 23.5T880-640v242q-18-14-38-23t-42-19v-200H447l-80-80H160v480h120v80H160ZM640-40q-91 0-168-48T360-220q35-84 112-132t168-48q91 0 168 48t112 132q-35 84-112 132T640-40Zm0-80q57 0 107.5-26t82.5-74q-32-48-82.5-74T640-320q-57 0-107.5 26T450-220q32 48 82.5 74T640-120Zm0-40q-25 0-42.5-17.5T580-220q0-25 17.5-42.5T640-280q25 0 42.5 17.5T700-220q0 25-17.5 42.5T640-160Zm-480-80v-480 277-37 240Z"/></svg>

icons/remove_24dp_1F1F1F.svg (new file, 149 B)

@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#1f1f1f"><path d="M200-440v-80h560v80H200Z"/></svg>

icons/terminal_24dp_1F1F1F.svg (new file, 340 B)

@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#1f1f1f"><path d="M160-160q-33 0-56.5-23.5T80-240v-480q0-33 23.5-56.5T160-800h640q33 0 56.5 23.5T880-720v480q0 33-23.5 56.5T800-160H160Zm0-80h640v-400H160v400Zm140-40-56-56 103-104-104-104 57-56 160 160-160 160Zm180 0v-80h240v80H480Z"/></svg>

icons/upgrade_24dp_1F1F1F.svg (new file, 216 B)

@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#1f1f1f"><path d="M280-160v-80h400v80H280Zm160-160v-327L336-544l-56-56 200-200 200 200-56 56-104-103v327h-80Z"/></svg>