Discussion & Conclusion
Key insights · Contributions · Next steps
Discussion
Hybrid vs TCN vs SNN · Encoding effects · Accuracy–efficiency trade-off
  • TCN-only: High accuracy and fast convergence, but it does not exploit event sparsity; deployment on neuroprosthetic hardware would require ANN-to-SNN conversion, and its energy cost still needs validation.
  • SNN-only is efficient but fragile: Lower firing rates reduce energy, but accuracy is limited (up to 62.84% in our setting), which restricts practical use.
  • SpikingTCN as a middle ground: Injects LIF dynamics into convolutional blocks, combining temporal sensitivity with sparsity for neuroprosthetic deployment (see the sketch after this list).
  • Hybrid TCN–SNN: Top accuracy with sparse spikes; however, the parallel dual-branch design means a low firing rate alone does not prove energy savings.
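As a concrete illustration, the sketch below shows one way a SpikingTCN-style block can be built in PyTorch: a causal dilated 1-D convolution whose activations drive a leaky integrate-and-fire (LIF) neuron. Channel sizes, the membrane time constant, and the firing threshold are illustrative assumptions, not the exact hyperparameters of the blocks evaluated here; training such a block would additionally need a surrogate gradient for the hard threshold.

```python
# Minimal sketch of a SpikingTCN-style block (inference only, assumed hyperparameters).
import torch
import torch.nn as nn


class LIF(nn.Module):
    """Leaky integrate-and-fire neuron applied along the last (time) dimension."""

    def __init__(self, tau: float = 2.0, v_th: float = 1.0):
        super().__init__()
        self.decay = 1.0 - 1.0 / tau   # membrane leak per time step (assumed value)
        self.v_th = v_th               # firing threshold (assumed value)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> binary spike train of the same shape
        v = torch.zeros_like(x[..., 0])
        spikes = []
        for t in range(x.shape[-1]):
            v = self.decay * v + x[..., t]   # leaky integration of the conv output
            s = (v >= self.v_th).float()     # emit a spike when the threshold is crossed
            v = v - s * self.v_th            # soft reset by subtraction
            spikes.append(s)
        return torch.stack(spikes, dim=-1)


class SpikingTCNBlock(nn.Module):
    """Causal dilated Conv1d followed by LIF dynamics."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation   # left padding keeps the block causal
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.lif = LIF()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = nn.functional.pad(x, (self.pad, 0))   # pad the past only, no future leakage
        return self.lif(self.conv(x))


if __name__ == "__main__":
    block = SpikingTCNBlock(in_ch=8, out_ch=16, dilation=2)
    out = block(torch.randn(4, 8, 100))           # (batch, channels, time)
    print(out.shape, out.mean().item())           # binary spikes; mean = firing rate
```

Because the block's output is a binary spike train, downstream layers see only events, which is what makes the sparsity statistics discussed next meaningful.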
Encoding matters: latency coding was optimal for SNN-only, while rate coding was superior for SpikingTCN, highlighting the interaction between convolutional filters and spike timing.
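For reference, the sketch below illustrates the two encodings in minimal form: rate coding samples Bernoulli spikes with probability given by the (normalised) input, while latency coding emits a single spike whose timing encodes the value. The time horizon T and the normalisation to [0, 1] are illustrative assumptions, not the exact encoders used in the experiments.

```python
# Minimal sketch of rate vs latency spike encoding (assumed normalisation and horizon T).
import torch


def rate_encode(x: torch.Tensor, T: int = 50) -> torch.Tensor:
    """Rate coding: each value is the per-step firing probability (Bernoulli sampling)."""
    # x: (..., features) in [0, 1] -> spikes: (T, ..., features)
    return torch.bernoulli(x.clamp(0, 1).unsqueeze(0).expand(T, *x.shape))


def latency_encode(x: torch.Tensor, T: int = 50) -> torch.Tensor:
    """Latency coding: one spike per feature; larger values fire earlier."""
    t_fire = ((1.0 - x.clamp(0, 1)) * (T - 1)).round().long()   # time of the single spike
    spikes = torch.zeros(T, *x.shape)
    spikes.scatter_(0, t_fire.unsqueeze(0), 1.0)
    return spikes


if __name__ == "__main__":
    x = torch.rand(8)                       # 8 normalised input channels (illustrative)
    print(rate_encode(x).mean().item())     # denser train: mean rate tracks x
    print(latency_encode(x).sum().item())   # sparse train: exactly one spike per channel
```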
Measured sparsity: average firing rates were kept within 1–26%, aligning with the neuroprosthetic efficiency assumption that power scales with event count.
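A minimal sketch of how such a firing-rate statistic can be computed: the fraction of active neuron/time-step slots, averaged over a batch. The exact per-layer averaging behind the 1–26% figures may differ in detail.

```python
# Minimal sketch of measuring an average firing rate from a binary spike tensor.
import torch


def firing_rate(spikes: torch.Tensor) -> float:
    """Mean spike probability of a binary spike tensor (any shape)."""
    return spikes.float().mean().item()


if __name__ == "__main__":
    # Synthetic spike train for illustration: (batch, neurons, time), ~10% active.
    spikes = (torch.rand(32, 128, 100) < 0.10).float()
    print(f"average firing rate: {firing_rate(spikes):.2%}")
```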
Fairness note for Hybrid: report both dense MACs (TCN branch) and spike/synaptic events (SNN branch); verify claims on hardware or faithful simulators.
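A minimal sketch of this dual accounting, with illustrative layer shapes and fan-out rather than the actual network: dense MACs for a convolutional layer on the TCN branch, and spike-driven synaptic events for a spiking layer on the SNN branch.

```python
# Minimal sketch of dual cost accounting: dense MACs vs spike-driven synaptic events.
import torch


def conv1d_macs(in_ch: int, out_ch: int, kernel: int, out_len: int) -> int:
    """Dense multiply-accumulates of one Conv1d layer (bias ignored)."""
    return in_ch * out_ch * kernel * out_len


def synaptic_events(spikes: torch.Tensor, fan_out: int) -> int:
    """Event count of a spiking layer: each spike triggers `fan_out` synaptic updates."""
    return int(spikes.sum().item()) * fan_out


if __name__ == "__main__":
    macs = conv1d_macs(in_ch=8, out_ch=16, kernel=3, out_len=100)   # TCN-branch layer (illustrative)
    spikes = (torch.rand(16, 100) < 0.05).float()                   # SNN-branch layer at 5% firing rate
    events = synaptic_events(spikes, fan_out=16)
    print(f"dense MACs: {macs}, synaptic events: {events}")
```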

Takeaway: Hybrid achieves the best accuracy and a promising accuracy–efficiency balance, but energy advantages must be validated on-device. SpikingTCN remains a lightweight, neuroprosthetic-friendly alternative.

Conclusion
Hybrid = best accuracy; SpikingTCN = balanced neuroprosthetic controller
Hybrid combines the stability of TCNs with the sparsity of SNNs and achieved the best accuracy in our study. SpikingTCN shows strong potential as a balanced accuracy–efficiency neuroprosthetic controller.
Future: Real-time & low-power
Hybrid: Confidence gating (invoke the TCN branch only when confidence falls below a threshold) to raise effective accuracy. SpikingTCN: Apply zero-activity (zero-spike) convolution skipping to avoid unnecessary convolutions and further reduce computation.
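A minimal sketch of the confidence-gating idea, assuming the gate uses the SNN branch's maximum softmax probability and a fixed threshold (illustrative choices, not an implemented mechanism): run the cheap branch first and invoke the dense TCN branch only for low-confidence inputs.

```python
# Minimal sketch of confidence gating between an SNN branch and a TCN branch (inference only).
import torch
import torch.nn as nn


@torch.no_grad()
def gated_predict(x: torch.Tensor,
                  snn_branch: nn.Module,
                  tcn_branch: nn.Module,
                  threshold: float = 0.8) -> torch.Tensor:
    """Return class logits, falling back to the TCN branch on low-confidence inputs."""
    logits = snn_branch(x)
    confidence = torch.softmax(logits, dim=-1).max(dim=-1).values
    low_conf = confidence < threshold                  # samples that need the dense branch
    if low_conf.any():
        logits[low_conf] = tcn_branch(x[low_conf])     # selectively invoke the TCN branch
    return logits


if __name__ == "__main__":
    # Placeholder branches for illustration; in practice these are the trained SNN and TCN models.
    snn = nn.Sequential(nn.Flatten(), nn.Linear(8 * 100, 5))
    tcn = nn.Sequential(nn.Flatten(), nn.Linear(8 * 100, 5))
    out = gated_predict(torch.randn(4, 8, 100), snn, tcn)
    print(out.shape)
```

Average cost then scales with how often the gate fires, so the threshold trades accuracy against the fraction of inputs that reach the dense branch.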
Future: Hardware validation
Validate on Intel Loihi / SpiNNaker; for fair comparisons, report inference latency, MAC counts for dense branches, and spike/synaptic event counts for spiking branches side by side.

Contribution: A principled comparison across architectures and encodings, with spike statistics linking accuracy to event cost.