AI Student @ Air Uni · Automation Engineer @ Auxth
02 / Selected Works
4 projects
▸ 95% accuracy on seizure detection from raw EEG signals.
End-to-end pipeline for preprocessing multi-channel EEG recordings, extracting spectral and temporal features, and classifying seizure events using a 1D-CNN + LSTM architecture. Achieved 95% accuracy on the CHB-MIT dataset with real-time inference capability.
# Pipeline overview
def seizure_pipeline(eeg_signal):
    # 1. Bandpass filter (0.5–50 Hz)
    filtered = bandpass_filter(eeg_signal, lo=0.5, hi=50)
    # 2. Feature extraction: spectral power + wavelet coefficients
    features = extract_features(filtered)
    # 3. Inference: 1D-CNN + BiLSTM classifier
    prediction = model.predict(features)  # → {seizure: 0.95, normal: 0.05}
    return prediction
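The 1D-CNN + BiLSTM classifier behind step 3 could look roughly like this; a minimal PyTorch sketch assuming 23-channel CHB-MIT input, with illustrative layer sizes rather than the exact trained configuration.

# Illustrative 1D-CNN + BiLSTM classifier (channel count and layer sizes are assumptions)
import torch.nn as nn

class SeizureClassifier(nn.Module):
    def __init__(self, n_channels=23, n_classes=2):
        super().__init__()
        # Temporal convolution over the EEG channels
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bidirectional LSTM over the downsampled time axis
        self.lstm = nn.LSTM(64, 128, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 128, n_classes)

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.conv(x)                     # (batch, 64, time // 2)
        h, _ = self.lstm(h.transpose(1, 2))  # (batch, time // 2, 256)
        return self.head(h[:, -1])           # logits from the final timestep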
▸ Reduced manual processing latency by 40% at Auxth.
Designed and deployed event-driven automation workflows using n8n and FastAPI microservices. Integrated Slack, Notion, and internal APIs to orchestrate multi-step business processes. Containerized with Docker for reproducible deployments.
# Workflow trigger → process → notify
@app.post("/webhook/process")
async def handle_event(payload: EventPayload):
    # 1. Validate & enrich incoming data
    enriched = await enrich_payload(payload)
    # 2. Execute business logic pipeline
    result = await pipeline.run(enriched)  # latency: -40%
    # 3. Fan-out notifications
    await notify(channels=["slack", "notion"], data=result)
    return {"status": "processed", "id": result.id}
▸ From-scratch implementation of multi-head attention and positional encoding.
Comprehensive implementation and analysis of the Transformer architecture from 'Attention Is All You Need.' Built multi-head self-attention, positional encoding (sinusoidal + learned), and layer normalization from scratch. Benchmarked against Hugging Face reference implementations.
import math
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.n_heads = n_heads
        self.d_k = d_model // n_heads
        self.W_q = nn.Linear(d_model, d_model)
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)

    def forward(self, q, k, v, mask=None):
        B, T, _ = q.shape
        # Project inputs and split into heads → (B, n_heads, T, d_k)
        Q = self.W_q(q).view(B, T, self.n_heads, self.d_k).transpose(1, 2)
        K = self.W_k(k).view(B, T, self.n_heads, self.d_k).transpose(1, 2)
        V = self.W_v(v).view(B, T, self.n_heads, self.d_k).transpose(1, 2)
        # Scaled dot-product attention
        scores = (Q @ K.transpose(-2, -1)) / math.sqrt(self.d_k)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, -1e9)
        attn = F.softmax(scores, dim=-1)
        # Merge heads and apply the output projection
        out = (attn @ V).transpose(1, 2).reshape(B, T, self.n_heads * self.d_k)
        return self.W_o(out)
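The positional-encoding half of the project pairs with this; a minimal sketch of the sinusoidal variant following the formulation in 'Attention Is All You Need', where the max_len default is an assumption.

# Sinusoidal positional encoding: PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), cos for odd dims
import math
import torch
import torch.nn as nn

class SinusoidalPositionalEncoding(nn.Module):
    def __init__(self, d_model=512, max_len=5000):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1)                 # (max_len, 1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)                           # fixed, not learnable

    def forward(self, x):                                        # x: (batch, seq_len, d_model)
        return x + self.pe[: x.size(1)]

The learned alternative benchmarked against it is typically an nn.Embedding(max_len, d_model) looked up by position index.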
▸ Ontology-driven reasoning engine for domain knowledge inference.
Built an OWL ontology for a specialized domain using Protégé, integrated SPARQL endpoints for querying, and implemented description logic reasoning for automated inference. Demonstrates formal knowledge engineering principles.
# SPARQL query: infer all instances of a class via reasoning
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX ont:  <http://example.org/ontology#>
SELECT ?entity ?type WHERE {
  ?entity rdf:type ?type .
  ?type rdfs:subClassOf* ont:IntelligentAgent .
  FILTER(?type != owl:Nothing)
}
ORDER BY ?type
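Issuing that query against the project's endpoint could look like this; a minimal sketch using SPARQLWrapper, with a placeholder endpoint URL (e.g. a local Fuseki instance) rather than the actual deployment.

# Query a SPARQL endpoint and print the inferred instances
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX ont:  <http://example.org/ontology#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?entity ?type WHERE {
  ?entity a ?type .
  ?type rdfs:subClassOf* ont:IntelligentAgent .
}
"""

sparql = SPARQLWrapper("http://localhost:3030/ontology/sparql")  # placeholder endpoint
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["entity"]["value"], row["type"]["value"])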
03 / Capabilities
Competencies organized by functional domain, not arbitrary percentages.
Model development & training pipelines
Formal reasoning & knowledge systems
Processing, analysis & visualization
Infrastructure & delivery
Programming & markup
04 / Research & Writing
Why Positional Encoding Matters More Than You Think
Dissecting sinusoidal vs. learned positional encodings and their impact on sequence modeling performance.
Exploring the gap between attention weights and true feature importance in Transformer models.
How Knowledge Representation and Reasoning (KRR) complements modern deep learning approaches.
Lessons from production: error handling, idempotency, and observability in workflow automation.