System Online: Available for Research & Engineering Roles

Building intelligent systems that bridge data and decision-making.

AI Student @ Air Uni · Automation Engineer @ Auxth

02 / Selected Works

The Index

95% accuracy on seizure detection from raw EEG signals.

End-to-end pipeline for preprocessing multi-channel EEG recordings, extracting spectral and temporal features, and classifying seizure events using a 1D-CNN + BiLSTM architecture. Achieved 95% accuracy on the CHB-MIT dataset with real-time inference capability.

# Pipeline overview
def seizure_pipeline(eeg_signal):
    # 1. Bandpass filter (0.5–50 Hz)
    filtered = bandpass_filter(eeg_signal, lo=0.5, hi=50)
    
    # 2. Feature extraction — spectral power + wavelet coefficients
    features = extract_features(filtered)
    
    # 3. Inference — 1D-CNN + BiLSTM classifier
    prediction = model.predict(features)  # → {seizure: 0.95, normal: 0.05}
    
    return prediction
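
For the filtering step, here is a minimal sketch of what bandpass_filter can look like using SciPy; the 256 Hz sampling rate and 4th-order Butterworth design are illustrative assumptions, not necessarily the project's exact settings.

import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_filter(eeg_signal, lo=0.5, hi=50, fs=256, order=4):
    # Design a Butterworth bandpass; fs lets us pass cutoffs in Hz directly
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    # filtfilt runs forward and backward, so the output has zero phase shift
    return filtfilt(b, a, eeg_signal, axis=-1)

# Usage: 23 channels × 10 s of synthetic EEG at 256 Hz
signal = np.random.randn(23, 2560)
filtered = bandpass_filter(signal)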
View project →

Reduced manual processing latency by 40% at Auxth.

Designed and deployed event-driven automation workflows using n8n and FastAPI microservices. Integrated Slack, Notion, and internal APIs to orchestrate multi-step business processes. Containerized with Docker for reproducible deployments.

# Workflow trigger → process → notify
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class EventPayload(BaseModel):
    source: str  # illustrative fields; the real schema depends on the event source
    data: dict

# enrich_payload, pipeline, and notify are defined elsewhere in the service
@app.post("/webhook/process")
async def handle_event(payload: EventPayload):
    # 1. Validate & enrich incoming data (Pydantic rejects malformed payloads)
    enriched = await enrich_payload(payload)

    # 2. Execute business logic pipeline
    result = await pipeline.run(enriched)  # latency: -40%

    # 3. Fan-out notifications
    await notify(channels=["slack", "notion"], data=result)

    return {"status": "processed", "id": result.id}
View project →

From-scratch implementation of multi-head attention and positional encoding.

Comprehensive implementation and analysis of the Transformer architecture from 'Attention Is All You Need.' Built multi-head self-attention, positional encoding (sinusoidal + learned), and layer normalization from scratch. Benchmarked against Hugging Face reference implementations.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.n_heads = n_heads
        self.d_k = d_model // n_heads
        self.W_q = nn.Linear(d_model, d_model)
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)  # output projection

    def forward(self, q, k, v, mask=None):
        B = q.size(0)
        # Project, then split into heads: (B, n_heads, seq_len, d_k)
        Q = self.W_q(q).view(B, -1, self.n_heads, self.d_k).transpose(1, 2)
        K = self.W_k(k).view(B, -1, self.n_heads, self.d_k).transpose(1, 2)
        V = self.W_v(v).view(B, -1, self.n_heads, self.d_k).transpose(1, 2)
        # Scaled dot-product attention
        scores = (Q @ K.transpose(-2, -1)) / math.sqrt(self.d_k)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, -1e9)
        attn = F.softmax(scores, dim=-1)
        # Merge heads and project back to d_model
        out = (attn @ V).transpose(1, 2).reshape(B, -1, self.n_heads * self.d_k)
        return self.W_o(out)
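
Since the write-up also covers sinusoidal positional encoding, here is a minimal sketch of that component, reusing the imports above; the class and buffer names are mine, not necessarily the project's.

class SinusoidalPositionalEncoding(nn.Module):
    def __init__(self, d_model=512, max_len=5000):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1)  # (max_len, 1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)  # even dimensions: sine
        pe[:, 1::2] = torch.cos(pos * div)  # odd dimensions: cosine
        self.register_buffer("pe", pe)  # saved with the model, but not trained

    def forward(self, x):  # x: (batch, seq_len, d_model)
        return x + self.pe[: x.size(1)]

In the paper these encodings are added to the token embeddings before the first encoder layer, which is exactly what forward does here.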
View project →

Ontology-driven reasoning engine for domain knowledge inference.

Built an OWL ontology for a specialized domain using Protégé, integrated SPARQL endpoints for querying, and implemented description logic reasoning for automated inference. Demonstrates formal knowledge engineering principles.

# SPARQL query — infer all instances of a class via reasoning
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX ont:  <http://example.org/ontology#>

SELECT ?entity ?type WHERE {
    ?entity rdf:type ?type .
    ?type rdfs:subClassOf* ont:IntelligentAgent .
    FILTER(?type != owl:Nothing)
}
ORDER BY ?type
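
The same query can also be run programmatically; here is a sketch using rdflib, with owlrl materializing the inferred triples first. The file name and the choice of libraries are assumptions, not necessarily the project's actual stack.

import owlrl
from rdflib import Graph

g = Graph()
g.parse("ontology.owl")  # hypothetical Protégé export

# Materialize the OWL 2 RL closure so inferred types are queryable
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

query = """
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ont:  <http://example.org/ontology#>
SELECT ?entity ?type WHERE {
    ?entity rdf:type ?type .
    ?type rdfs:subClassOf* ont:IntelligentAgent .
}
"""
for entity, cls in g.query(query):
    print(entity, cls)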
View project →

03 / Capabilities

Knowledge Graph

Competencies organized by functional domain — not arbitrary percentages.

Inference

Model development & training pipelines

PyTorch · TensorFlow · Hugging Face · scikit-learn · ONNX

Logic

Formal reasoning & knowledge systems

KRR · SPARQL · Protégé · OWL Ontologies · Description Logic

Data

Processing, analysis & visualization

Pandas · NumPy · SciPy · Matplotlib · SQL

Deployment

Infrastructure & delivery

Docker · FastAPI · n8n · Git · Linux · CI/CD

Language

Programming & markup

Python · TypeScript · C++ · LaTeX · Bash

04 / Research & Writing

The Log

2025-12

Why Positional Encoding Matters More Than You Think

Dissecting sinusoidal vs. learned positional encodings and their impact on sequence modeling performance.

Read more →
2025-10

Attention Is Not Explanation: A Critical Reading

Exploring the gap between attention weights and true feature importance in Transformer models.

Read more →
2025-08

From Ontologies to Neural Networks: Bridging Symbolic and Sub-Symbolic AI

How Knowledge Representation and Reasoning (KRR) complements modern deep learning approaches.

Read more →
2025-05

Building Reliable Automation Pipelines with n8n and FastAPI

Lessons from production: error handling, idempotency, and observability in workflow automation.

Read more →