Utilities¶
Utility modules provide supporting functionality for the spike sorting pipeline.
blech_utils¶
Core utility functions used throughout the codebase.
Key Classes and Functions¶
Tee¶
Redirect stdout to both console and file for logging.
```python
from utils.blech_utils import Tee
import sys

# Redirect output to file and console
sys.stdout = Tee('/path/to/data/dir', name='logfile.txt')
print("This goes to both console and file")
```
path_handler¶
Handle file paths and directory operations.
```python
from utils.blech_utils import path_handler

# Instantiate path handler
ph = path_handler()

# Access blech_clust directory
blech_dir = ph.blech_clust_dir

# Access home directory
home = ph.home_dir
```
imp_metadata¶
Import and manage experimental metadata.
```python
from utils.blech_utils import imp_metadata
import sys

# Load metadata (pass sys.argv, or a list whose second element
# is the data directory path)
metadata = imp_metadata([sys.argv, '/path/to/data'])

# Access metadata attributes
hdf5_name = metadata.hdf5_name
info_dict = metadata.info_dict
layout_df = metadata.layout
```
Clustering Utilities¶
Located in utils/clustering/, these modules provide clustering algorithms and tools.
Key Functions¶
- Spike clustering algorithms
- Feature extraction
- Cluster validation
- Merge/split operations
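The actual implementations live in `utils/clustering/`; as an illustration of the general approach only (the function names below are hypothetical, not the module's API), feature extraction via PCA followed by k-means clustering can be sketched as:

```python
import numpy as np

def extract_features(waveforms, n_components=3):
    """Project spike waveforms onto their top principal components.

    waveforms: (n_spikes, n_samples) array of aligned spike snippets.
    Returns an (n_spikes, n_components) feature matrix.
    """
    centered = waveforms - waveforms.mean(axis=0)
    # SVD of the centered data gives the principal axes in Vt
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T

def kmeans_cluster(features, n_clusters=2, n_iter=50, seed=0):
    """Minimal k-means: assign each spike to the nearest centroid."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Distance from every point to every centroid
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centroids[k] = features[labels == k].mean(axis=0)
    return labels

# Synthetic "waveforms": two groups of noisy spike-like templates
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 40)
a = -np.sin(2 * np.pi * t) + rng.normal(0, 0.1, (100, 40))
b = -0.4 * np.sin(2 * np.pi * t) + rng.normal(0, 0.1, (100, 40))
waveforms = np.vstack([a, b])

features = extract_features(waveforms)
labels = kmeans_cluster(features, n_clusters=2)
```

The real pipeline uses richer features (e.g. waveform energy and amplitude alongside PCA components) and model-based clustering, but the feature-then-cluster structure is the same.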
Data Management¶
ephys_data Module¶
Comprehensive data handling for electrophysiology recordings.
See Ephys Data for detailed documentation.
Quality Assurance¶
qa_utils Module¶
Tools for dataset quality assessment and validation.
See QA Tools for detailed documentation.
Helper Scripts¶
infer_rnn_rates.py¶
Infer firing rates from spike trains using a recurrent neural network (RNN).
Options:
- `--train_steps`: Number of training steps
- `--hidden_size`: RNN hidden layer size
- `--bin_size`: Spike binning size
- `--retrain`: Force model retraining
blech_data_summary.py¶
Generate comprehensive dataset summary.
grade_dataset.py¶
Grade dataset quality based on metrics.
Configuration Management¶
Parameter Files¶
Utilities for loading and managing parameter files:
- JSON parameter loading
- Parameter validation
- Default value handling
Example¶
```python
import json

# Load parameters
with open('params/sorting_params.json', 'r') as f:
    params = json.load(f)

# Access parameters
max_clusters = params['max_clusters']
min_cluster_size = params['min_cluster_size']
```
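Default-value handling and validation are not shown above; a minimal sketch (the parameter names and defaults here are illustrative, not the pipeline's actual schema) merges user-supplied parameters over defaults and checks for required keys:

```python
import json

# Illustrative defaults, not the real parameter schema
DEFAULTS = {
    'max_clusters': 7,
    'min_cluster_size': 30,
}
REQUIRED = ['max_clusters', 'min_cluster_size']

def load_params(path, defaults=DEFAULTS):
    """Load a JSON parameter file, fill in defaults, and validate."""
    with open(path, 'r') as f:
        user_params = json.load(f)
    # User-supplied values override defaults
    params = {**defaults, **user_params}
    missing = [k for k in REQUIRED if k not in params]
    if missing:
        raise KeyError(f"Missing required parameters: {missing}")
    return params
```

With this pattern, a parameter file only needs to list the values that differ from the defaults.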
File I/O¶
HDF5 Operations¶
Functions for reading and writing HDF5 files:
```python
import tables

# Open HDF5 file
with tables.open_file('data.h5', 'r') as hf5:
    # Read spike times
    spike_times = hf5.root.spike_times[:]
    # Read unit information
    units = hf5.root.units[:]
```
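Writing follows the same pattern; a minimal round-trip sketch (the file and array names here are illustrative, not the pipeline's actual node layout):

```python
import numpy as np
import tables

# Create a file and store an array under the root group
with tables.open_file('example.h5', 'w') as hf5:
    hf5.create_array('/', 'spike_times', np.array([120, 450, 900]))

# Read it back
with tables.open_file('example.h5', 'r') as hf5:
    spike_times = hf5.root.spike_times[:]
```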
Binary Data¶
Functions for reading Intan binary data:
- Amplifier data (`.dat` files)
- Digital input data (DIN files)
- Auxiliary input data
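As a sketch of the low-level read, assuming Intan's one-file-per-channel amplifier format (each `.dat` file holds raw little-endian int16 samples at 0.195 µV per bit, per the Intan RHD data format; the filename below is hypothetical):

```python
import numpy as np

def read_amplifier_channel(path):
    """Read one Intan one-file-per-channel amplifier .dat file.

    Samples are stored as int16; multiplying by 0.195 converts
    the raw values to microvolts.
    """
    raw = np.fromfile(path, dtype=np.int16)
    return raw.astype(np.float32) * 0.195

# voltage_uv = read_amplifier_channel('amp-A-000.dat')
```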