12 Commits

SHA1 Message Date
953ea90c67 fix to bandpass filter 2025-10-20 16:07:18 -07:00
20b255321b improvements 2025-10-20 09:33:50 -07:00
b5afcec37d fixes to cross platform saves 2025-10-15 17:09:51 -07:00
5361f6ea21 changed the changelog 2025-10-15 16:12:51 -07:00
ee023c26c1 changelog fix 2025-10-15 16:10:48 -07:00
06c9ff0ecf update changelog 2025-10-15 15:59:26 -07:00
542dd85a78 general fixes 2025-10-15 15:51:02 -07:00
3e0f70ea49 fixes to build version 1.1.3 2025-10-15 12:59:24 -07:00
d6c71e0ab2 changelog fixes and further updates to cancel running process 2025-10-15 10:48:07 -07:00
87073fb218 more boris implementation 2025-10-15 10:00:44 -07:00
3d0fbd5c5e fix to boris events 2025-10-03 16:58:49 -07:00
3f38f5a978 updates for boris support 2025-09-26 14:01:32 -07:00
7 changed files with 1916 additions and 147 deletions


@@ -1,3 +1,52 @@
# Next Release
- Fixed saves created on Windows not opening on a Mac (hopefully the other way around too!)
- Added the option to right-click loaded snirf files to reveal them in a file browser or delete them if they are no longer desired
- Changed the way folders are opened to store the files separately rather than the folder as a whole, allowing individual files to be removed
- Fixed issues with dropdowns and bubbles not populating correctly when opening a single file, and temporarily removed the option to open multiple folders
- Improved crash handling and the message displayed to the user if the application crashes
- The progress bar now colours the failing stage red if a file fails during processing
- A warning message with information on what went wrong is displayed when a file fails to process. This message does not halt processing of the remaining files
- Fixed the number of rectangles in the progress bar to 20 (was incorrect in v1.1.1)
- Added validation to ensure loaded files do not have 2 dimensional data when clicking process to prevent inaccurate results from being generated
- Added more metadata information to the top left information panel
- Changed the Status Bar message shown when processing completes to state how many files succeeded and how many failed
- Added a clickable link below the selected file's metadata explaining the independent parameters and why they are useful
- Updated some tooltips to provide better, more accurate information
- Added details about the processing steps and their order into the user guide
# Version 1.1.4
- Fixed some display text to show the correct information
- A new option under Analysis has been added to export the data from a specified participant as a csv file. Fixes [Issue 19](https://git.research.dezeeuw.ca/tyler/flares/issues/19), [Issue 27](https://git.research.dezeeuw.ca/tyler/flares/issues/27)
- Added 2 new parameters - TIME_WINDOW_START and TIME_WINDOW_END. Fixes [Issue 29](https://git.research.dezeeuw.ca/tyler/flares/issues/29)
- These parameters affect the visualization of the significance and contrast images but do not change the total time modeled underneath
- Fixed the duration of annotations edited from a BORIS file from 0 seconds to their proper duration
- Added the annotation information to each participant under their "File information" window
- Fixed Macs not being able to save snirfs attempting to be updated from BORIS files, and in general the updated files not respecting the path chosen by the user
# Version 1.1.3
- Added back the ability to use the fOLD dataset. Fixes [Issue 23](https://git.research.dezeeuw.ca/tyler/flares/issues/23)
- A 5th option has been added under Analysis to access fOLD channels per participant
- Added an option to cancel the running process. Fixes [Issue 15](https://git.research.dezeeuw.ca/tyler/flares/issues/15)
- Prevented graph images from showing when participants are being processed. Fixes [Issue 24](https://git.research.dezeeuw.ca/tyler/flares/issues/24)
- Allow the option to remove all events of a type from all loaded snirfs. Fixes [Issue 25](https://git.research.dezeeuw.ca/tyler/flares/issues/25)
- Added new icons in the menu bar
- Added a terminal to interact with the app in a more command-like form
- Currently the terminal has no functionality but some features for batch operations will be coming soon!
- Inter-Group viewer now has the option to visualize the average response on the brain of all participants in the group. Fixes [Issue 26](https://git.research.dezeeuw.ca/tyler/flares/issues/26)
- Fixed the description under "Update events in snirf file..."
# Version 1.1.2
- Fixed incorrect colormaps being applied
- Added functionality to utilize external event markers from a file. Fixes [Issue 6](https://git.research.dezeeuw.ca/tyler/flares/issues/6)
# Version 1.1.1
- Fixed the number of rectangles in the progress bar to 19

flares.py

@@ -50,12 +50,12 @@ from scipy.spatial.distance import cdist
 # Backend visualization needed to be defined for pyinstaller
 import pyvistaqt # type: ignore
-import vtkmodules.util.data_model
-import vtkmodules.util.execution_model
+# import vtkmodules.util.data_model
+# import vtkmodules.util.execution_model
 # External library imports for mne
 from mne import (
-    EvokedArray, SourceEstimate, Info, Epochs, Label,
+    EvokedArray, SourceEstimate, Info, Epochs, Label, Annotations,
     events_from_annotations, read_source_spaces,
     stc_near_sensors, pick_types, grand_average, get_config, set_config, read_labels_from_annot
 ) # type: ignore
@@ -132,6 +132,11 @@ ENHANCE_NEGATIVE_CORRELATION: bool
 SHORT_CHANNEL: bool
+REMOVE_EVENTS: list
+TIME_WINDOW_START: int
+TIME_WINDOW_END: int
 VERBOSITY = True
 # FIXME: Shouldn't need each ordering - just order it before checking
@@ -179,6 +184,9 @@ REQUIRED_KEYS: dict[str, Any] = {
     "PSP_THRESHOLD": float,
     "SHORT_CHANNEL": bool,
+    "REMOVE_EVENTS": list,
+    "TIME_WINDOW_START": int,
+    "TIME_WINDOW_END": int
     # "REJECT_PAIRS": bool,
     # "FORCE_DROP_ANNOTATIONS": list,
     # "FILTER_LOW_PASS": float,
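The REQUIRED_KEYS additions above extend a name-to-type mapping used to validate loaded parameters. A minimal sketch of how such a mapping can drive config validation (the `validate_params` helper is hypothetical, not a function in flares.py):

```python
# Hypothetical validator: checks a params dict against a name -> type
# mapping like REQUIRED_KEYS above. Not part of flares.py itself.
from typing import Any

REQUIRED_KEYS: dict[str, Any] = {
    "SHORT_CHANNEL": bool,
    "REMOVE_EVENTS": list,
    "TIME_WINDOW_START": int,
    "TIME_WINDOW_END": int,
}

def validate_params(params: dict[str, Any]) -> list[str]:
    """Return a list of human-readable problems; empty means valid."""
    problems = []
    for key, expected in REQUIRED_KEYS.items():
        if key not in params:
            problems.append(f"missing {key}")
        elif not isinstance(params[key], expected):
            problems.append(f"{key} should be {expected.__name__}")
    return problems
```

Collecting all problems instead of raising on the first one lets a GUI report every bad parameter at once.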
@@ -260,40 +268,42 @@ def set_metadata(file_path, metadata: dict[str, Any]) -> None:
         val = file_metadata.get(key, None)
         if val not in (None, '', [], {}, ()): # check for "empty" values
             globals()[key] = val
+from queue import Empty # This works with multiprocessing.Manager().Queue()
 def gui_entry(config: dict[str, Any], gui_queue: Queue, progress_queue: Queue) -> None:
-    try:
-        # Start a thread to forward progress messages back to GUI
-        def forward_progress():
-            while True:
-                try:
-                    msg = progress_queue.get(timeout=1)
-                    if msg == "__done__":
-                        break
-                    gui_queue.put(msg)
-                except:
-                    continue
+    def forward_progress():
+        while True:
+            try:
+                msg = progress_queue.get(timeout=1)
+                if msg == "__done__":
+                    break
+                gui_queue.put(msg)
+            except Empty:
+                continue
+            except Exception as e:
+                gui_queue.put({
+                    "type": "error",
+                    "error": f"Forwarding thread crashed: {e}",
+                    "traceback": traceback.format_exc()
+                })
+                break
     t = threading.Thread(target=forward_progress, daemon=True)
     t.start()
+    try:
         file_paths = config['SNIRF_FILES']
         file_params = config['PARAMS']
         file_metadata = config['METADATA']
         max_workers = file_params.get("MAX_WORKERS", int(os.cpu_count()/4))
-        # Run the actual processing, with progress_queue passed down
-        print("actual call")
-        results = process_multiple_participants(file_paths, file_params, file_metadata, progress_queue, max_workers)
-        # Signal end of progress
-        progress_queue.put("__done__")
-        t.join()
+        results = process_multiple_participants(
+            file_paths, file_params, file_metadata, progress_queue, max_workers
+        )
         gui_queue.put({"success": True, "result": results})
     except Exception as e:
         gui_queue.put({
             "success": False,
@@ -301,6 +311,14 @@ def gui_entry(config: dict[str, Any], gui_queue: Queue, progress_queue: Queue) -
             "traceback": traceback.format_exc()
         })
+    finally:
+        # Always send done to the thread and avoid hanging
+        try:
+            progress_queue.put("__done__")
+        except:
+            pass
+        t.join(timeout=5) # prevent permanent hang
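The rewritten gui_entry relies on a "__done__" sentinel plus a finally block so the forwarding thread always terminates, even when processing raises. A stripped-down sketch of that pattern (queues and message contents simplified; this is not the actual flares.py code):

```python
import threading
from queue import Queue, Empty

def forward(progress_q: Queue, gui_q: Queue) -> None:
    # Forward progress messages until the "__done__" sentinel arrives.
    while True:
        try:
            msg = progress_q.get(timeout=1)
        except Empty:
            continue
        if msg == "__done__":
            break
        gui_q.put(msg)

progress_q, gui_q = Queue(), Queue()
t = threading.Thread(target=forward, args=(progress_q, gui_q), daemon=True)
t.start()
try:
    progress_q.put({"stage": 1})   # stand-in for a real progress message
finally:
    progress_q.put("__done__")     # always unblock the thread
    t.join(timeout=5)              # bounded join prevents a permanent hang

forwarded = gui_q.get_nowait()
```

Sending the sentinel in `finally` is what prevents the earlier bug where an exception left the forwarding thread blocked forever.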
def process_participant_worker(args):
@@ -335,9 +353,16 @@ def process_multiple_participants(file_paths, file_params, file_metadata, progre
         try:
             file_path, result, error = future.result()
             if error:
-                print(f"Error processing {file_path}: {error[0]}")
-                print(error[1])
+                error_message, error_traceback = error
+                if progress_queue:
+                    progress_queue.put({
+                        "type": "error",
+                        "file": file_path,
+                        "error": error_message,
+                        "traceback": error_traceback
+                    })
                 continue
             results_by_file[file_path] = result
         except Exception as e:
             print(f"Unexpected error processing {file_path}: {e}")
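The change above routes per-file failures through progress_queue as structured dicts instead of printing them, so the GUI can display a warning without halting the other files. A simplified, self-contained sketch of that pattern with concurrent.futures (the worker and file names are illustrative only):

```python
import traceback
from concurrent.futures import ThreadPoolExecutor, as_completed
from queue import Queue

def worker(path: str):
    # Stand-in for process_participant_worker: returns (path, result, error),
    # where error is (message, traceback) or None.
    if path.endswith("bad.snirf"):
        try:
            raise ValueError("2-dimensional data")
        except ValueError as e:
            return path, None, (str(e), traceback.format_exc())
    return path, f"processed {path}", None

progress_queue: Queue = Queue()
results_by_file = {}
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(worker, p) for p in ["a.snirf", "bad.snirf"]]
    for future in as_completed(futures):
        file_path, result, error = future.result()
        if error:
            error_message, error_traceback = error
            # Forward the failure as data; do not stop the other files.
            progress_queue.put({
                "type": "error",
                "file": file_path,
                "error": error_message,
                "traceback": error_traceback,
            })
            continue
        results_by_file[file_path] = result
```

Returning the error tuple from the worker (rather than letting the exception escape) keeps the traceback attached to the file that caused it.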
@@ -1045,7 +1070,8 @@ def filter_the_data(raw_haemo):
         average=True, xscale="log", color="r", show=False, amplitude=False
     )
-    raw_haemo = raw_haemo.filter(l_freq=None, h_freq=0.4, h_trans_bandwidth=0.2)
+    #raw_haemo = raw_haemo.filter(l_freq=None, h_freq=0.4, h_trans_bandwidth=0.2)
+    raw_haemo = raw_haemo.filter(0.05, 0.7, h_trans_bandwidth=0.2, l_trans_bandwidth=0.02)
     raw_haemo.compute_psd(fmax=2).plot(
         average=True, xscale="log", axes=fig_filter.axes, color="g", amplitude=False, show=False
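This commit replaces the 0.4 Hz low-pass with a 0.05–0.7 Hz band-pass. Outside of MNE, the same shaping can be sketched with SciPy (a generic zero-phase Butterworth band-pass; the filter order, sampling rate, and exact transition behaviour here are assumptions and differ from MNE's FIR design):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data: np.ndarray, fs: float, low: float = 0.05, high: float = 0.7,
             order: int = 4) -> np.ndarray:
    # Zero-phase Butterworth band-pass: keeps the haemodynamic band while
    # removing slow drift (< 0.05 Hz) and faster physiological noise (> 0.7 Hz).
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, data)

fs = 10.0  # assumed fNIRS sampling rate for the demo
t = np.arange(0, 60, 1 / fs)
drift = 0.5 * np.sin(2 * np.pi * 0.01 * t)   # slow drift: should be removed
signal = 1.0 * np.sin(2 * np.pi * 0.2 * t)   # in-band component: should survive
filtered = bandpass(drift + signal, fs)
```

The in-band 0.2 Hz component passes nearly unchanged while the 0.01 Hz drift is strongly attenuated, which is the motivation for adding the high-pass edge.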
@@ -1074,7 +1100,7 @@ def epochs_calculations(raw_haemo, events, event_dict):
     # Plot drop log
     # TODO: Why show this if we never use epochs2?
-    fig_epochs_dropped = epochs2.plot_drop_log()
+    fig_epochs_dropped = epochs2.plot_drop_log(show=False)
     fig_epochs.append(("fig_epochs_dropped", fig_epochs_dropped))
     # Plot for each condition
@@ -1108,7 +1134,7 @@ def epochs_calculations(raw_haemo, events, event_dict):
     evokeds3 = []
     colors = []
     conditions = list(epochs.event_id.keys())
-    cmap = plt.cm.get_cmap("tab10", len(conditions))
+    cmap = plt.get_cmap("tab10", len(conditions))
     for idx, cond in enumerate(conditions):
         evoked = epochs[cond].average(picks="hbo")
@@ -1470,9 +1496,15 @@
 def fold_channels(raw: BaseRaw) -> None:
-    # Locate the fOLD excel files
-    set_config('MNE_NIRS_FOLD_PATH', resource_path("../../mne_data/fOLD/fOLD-public-master/Supplementary")) # type: ignore
+    # if getattr(sys, 'frozen', False):
+    path = os.path.expanduser("~") + "/mne_data/fOLD/fOLD-public-master/Supplementary"
+    logger.info(path)
+    set_config('MNE_NIRS_FOLD_PATH', resource_path(path)) # type: ignore
+    # # Locate the fOLD excel files
+    # else:
+    # logger.info("yabba")
+    # set_config('MNE_NIRS_FOLD_PATH', resource_path("../../mne_data/fOLD/fOLD-public-master/Supplementary")) # type: ignore
     output = None
@@ -1534,8 +1566,8 @@ def fold_channels(raw: BaseRaw) -> None:
         "Brain_Outside",
     ]
-    cmap1 = plt.cm.get_cmap('tab20')  # First 20 colors
-    cmap2 = plt.cm.get_cmap('tab20b') # Next 20 colors
+    cmap1 = plt.get_cmap('tab20')  # First 20 colors
+    cmap2 = plt.get_cmap('tab20b') # Next 20 colors
     # Combine the colors from both colormaps
     colors = [cmap1(i) for i in range(20)] + [cmap2(i) for i in range(20)] # Total 40 colors
@@ -1611,6 +1643,7 @@ def fold_channels(raw: BaseRaw) -> None:
     for ax in axes[len(hbo_channel_names):]:
         ax.axis('off')
+    plt.show()
     return fig, legend_fig
@@ -2795,7 +2828,7 @@ def calculate_dpf(file_path):
     # order is hbo / hbr
     with h5py.File(file_path, 'r') as f:
         wavelengths = f['/nirs/probe/wavelengths'][:]
-    logger.info("Wavelengths (nm):", wavelengths)
+    logger.info(f"Wavelengths (nm): {wavelengths}")
     wavelengths = sorted(wavelengths, reverse=True)
     age = float(AGE)
     logger.info(f"Their age was {AGE}")
@@ -2935,6 +2968,19 @@ def process_participant(file_path, progress_callback=None):
     logger.info("14")
     # Step 14: Design Matrix
+    events_to_remove = REMOVE_EVENTS
+    filtered_annotations = [ann for ann in raw.annotations if ann['description'] not in events_to_remove]
+    new_annot = Annotations(
+        onset=[ann['onset'] for ann in filtered_annotations],
+        duration=[ann['duration'] for ann in filtered_annotations],
+        description=[ann['description'] for ann in filtered_annotations]
+    )
+    # Set the new annotations
+    raw_haemo.set_annotations(new_annot)
     design_matrix, fig_design_matrix = make_design_matrix(raw_haemo, short_chans)
     fig_individual["Design Matrix"] = fig_design_matrix
     if progress_callback: progress_callback(15)
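The REMOVE_EVENTS block above rebuilds the annotation set without the unwanted descriptions before the design matrix is built. The core filtering step is plain Python and can be sketched without MNE (plain dicts stand in for MNE annotation entries here):

```python
# Plain dicts stand in for MNE's Annotations entries, each of which
# exposes onset, duration, and description.
annotations = [
    {"onset": 1.0, "duration": 2.0, "description": "rest"},
    {"onset": 5.0, "duration": 3.0, "description": "stim"},
    {"onset": 12.0, "duration": 2.0, "description": "rest"},
]
REMOVE_EVENTS = ["rest"]

# Keep only annotations whose description is not on the removal list.
filtered = [ann for ann in annotations if ann["description"] not in REMOVE_EVENTS]

# MNE's Annotations constructor takes parallel lists, hence the unzip:
onsets = [ann["onset"] for ann in filtered]
durations = [ann["duration"] for ann in filtered]
descriptions = [ann["description"] for ann in filtered]
```

The three parallel lists mirror the `onset=`, `duration=`, and `description=` arguments passed to `Annotations(...)` in the diff.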
@@ -3008,7 +3054,11 @@ def process_participant(file_path, progress_callback=None):
     contrast_dict = {}
     for condition in all_conditions:
-        delay_cols = [col for col in all_delay_cols if col.startswith(f"{condition}_delay_")]
+        delay_cols = [
+            col for col in all_delay_cols
+            if col.startswith(f"{condition}_delay_") and
+            TIME_WINDOW_START <= int(col.split("_delay_")[-1]) <= TIME_WINDOW_END
+        ]
         if not delay_cols:
             continue # skip if no columns found (shouldn't happen?)
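The windowing change keeps only the delay columns whose lag falls inside [TIME_WINDOW_START, TIME_WINDOW_END]. A self-contained sketch of the column filter (column names follow the `condition_delay_N` pattern used in the diff; the sample names and window bounds are illustrative):

```python
TIME_WINDOW_START, TIME_WINDOW_END = 2, 6  # example bounds, in delay units

all_delay_cols = [
    "tapping_delay_0", "tapping_delay_2", "tapping_delay_4",
    "tapping_delay_8", "rest_delay_4",
]
condition = "tapping"

# Keep columns for this condition whose numeric delay lies in the window.
delay_cols = [
    col for col in all_delay_cols
    if col.startswith(f"{condition}_delay_")
    and TIME_WINDOW_START <= int(col.split("_delay_")[-1]) <= TIME_WINDOW_END
]
```

Parsing the delay from the column suffix with `split("_delay_")[-1]` is what ties the window parameters to the design-matrix columns.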
@@ -3037,5 +3087,16 @@ def process_participant(file_path, progress_callback=None):
     if progress_callback: progress_callback(20)
     logger.info("20")
+    sanitize_paths_for_pickle(raw_haemo, epochs)
     return raw_haemo, epochs, fig_bytes, cha, contrast_results, df_ind, design_matrix, AGE, GENDER, GROUP, True
+def sanitize_paths_for_pickle(raw_haemo, epochs):
+    # Fix raw_haemo._filenames
+    if hasattr(raw_haemo, '_filenames'):
+        raw_haemo._filenames = [str(p) for p in raw_haemo._filenames]
+    # Fix epochs._raw._filenames
+    if hasattr(epochs, '_raw') and hasattr(epochs._raw, '_filenames'):
+        epochs._raw._filenames = [str(p) for p in epochs._raw._filenames]
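sanitize_paths_for_pickle converts Path objects to plain strings before the result is pickled, which is what makes saves portable: a platform-specific Path (e.g. a WindowsPath) cannot be unpickled on the other OS, while strings can. The underlying fix can be sketched generically (the `Recording` class is a minimal stand-in, not an MNE type):

```python
import pickle
from pathlib import Path

class Recording:
    # Minimal stand-in for an object carrying a _filenames attribute,
    # the way raw_haemo does in flares.py.
    def __init__(self, filenames):
        self._filenames = filenames

rec = Recording([Path("data") / "participant1.snirf"])

# Before pickling, replace platform-specific Path objects with plain strings.
if hasattr(rec, "_filenames"):
    rec._filenames = [str(p) for p in rec._filenames]

payload = pickle.dumps(rec)   # now safe to load on another platform
restored = pickle.loads(payload)
```

Guarding with `hasattr` keeps the helper harmless on objects that never carried filenames in the first place.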


@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#1f1f1f"><path d="M160-160q-33 0-56.5-23.5T80-240v-480q0-33 23.5-56.5T160-800h240l80 80h320q33 0 56.5 23.5T880-640v242q-18-14-38-23t-42-19v-200H447l-80-80H160v480h120v80H160ZM640-40q-91 0-168-48T360-220q35-84 112-132t168-48q91 0 168 48t112 132q-35 84-112 132T640-40Zm0-80q57 0 107.5-26t82.5-74q-32-48-82.5-74T640-320q-57 0-107.5 26T450-220q32 48 82.5 74T640-120Zm0-40q-25 0-42.5-17.5T580-220q0-25 17.5-42.5T640-280q25 0 42.5 17.5T700-220q0 25-17.5 42.5T640-160Zm-480-80v-480 277-37 240Z"/></svg>



@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#1f1f1f"><path d="M200-440v-80h560v80H200Z"/></svg>



@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#1f1f1f"><path d="M160-160q-33 0-56.5-23.5T80-240v-480q0-33 23.5-56.5T160-800h640q33 0 56.5 23.5T880-720v480q0 33-23.5 56.5T800-160H160Zm0-80h640v-400H160v400Zm140-40-56-56 103-104-104-104 57-56 160 160-160 160Zm180 0v-80h240v80H480Z"/></svg>



@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#1f1f1f"><path d="M280-160v-80h400v80H280Zm160-160v-327L336-544l-56-56 200-200 200 200-56 56-104-103v327h-80Z"/></svg>


main.py

File diff suppressed because it is too large.