I’m building an Agentic AI pipeline for hardware verification coverage generation. The pipeline uses a SharePoint agent to ask targeted questions about specific SWI register fields across multiple documents, retrieving raw context chunks that an LLM then processes into compliant SystemVerilog functional coverage artifacts.
No document parsing or ingestion is involved. The SharePoint agent handles all retrieval. I need guidance on orchestrating the agentic query loop, structuring prompts around raw context, and producing reliable structured output at scale (~10,000 fields).
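To make the shape of the per-field loop concrete, here is a minimal sketch of the final step: turning the LLM's structured JSON output into a SystemVerilog covergroup. All names are hypothetical (`render_covergroup`, the JSON schema with `name`/`values`); the real schema and agent calls are what I want help designing.

```python
import json

def render_covergroup(field: dict) -> str:
    """Render a minimal SystemVerilog covergroup from structured LLM output.

    Assumes the LLM was prompted to emit JSON shaped like:
      {"name": "ctrl_mode", "values": {"IDLE": 0, "RUN": 1}}
    """
    # One named bin per enumerated field value
    bins = "\n".join(
        f"    bins {label} = {{{value}}};"
        for label, value in field["values"].items()
    )
    return (
        f"covergroup cg_{field['name']};\n"
        f"  cp_{field['name']}: coverpoint {field['name']} {{\n"
        f"{bins}\n"
        f"  }}\n"
        f"endgroup"
    )

# Usage: parse the model's JSON reply, then render
sv = render_covergroup(json.loads(
    '{"name": "ctrl_mode", "values": {"IDLE": 0, "RUN": 1}}'
))
```

The point is that the LLM never emits SystemVerilog directly; it emits validated JSON, and deterministic code produces the final artifact.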
The core pipeline is already underway. I’m looking for 1-hour daily guidance sessions over 3–4 weeks — not to build it for me, but to mentor, review, and unblock me as I build.
What You’ll Help With
∙ Designing the SharePoint agentic query strategy (what questions to ask per field, how to retrieve context from multiple documents)
∙ Orchestrating multi-document context merging per SWI field before LLM generation
∙ Prompt engineering for structured output (JSON → SystemVerilog coverage) from raw SharePoint context
∙ Scaling to ~10,000 fields with async LLM dispatch, caching, and deduplication
∙ Testing and observability (LangSmith, LLM-as-judge, golden datasets)
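On the scaling point above, this is roughly the dispatch pattern I have in mind: bounded concurrency plus deduplication of identical (field, context) pairs so repeated fields share one in-flight LLM call. A sketch, not the implementation — `llm_call` stands in for whatever async client wrapper we settle on:

```python
import asyncio
import hashlib

class FieldDispatcher:
    """Bounded-concurrency LLM dispatch with in-flight deduplication."""

    def __init__(self, llm_call, max_concurrency: int = 20):
        self._llm_call = llm_call
        self._sem = asyncio.Semaphore(max_concurrency)  # cap simultaneous calls
        self._tasks: dict[str, asyncio.Task] = {}       # dedup cache of tasks

    async def _generate(self, field_name: str, context: str) -> str:
        async with self._sem:
            return await self._llm_call(field_name, context)

    def submit(self, field_name: str, context: str) -> asyncio.Task:
        # Same field + same merged context -> reuse the in-flight task
        key = hashlib.sha256(f"{field_name}|{context}".encode()).hexdigest()
        if key not in self._tasks:
            self._tasks[key] = asyncio.create_task(
                self._generate(field_name, context)
            )
        return self._tasks[key]

    async def run(self, fields) -> list[str]:
        # fields: iterable of (field_name, context) pairs
        return await asyncio.gather(*(self.submit(n, c) for n, c in fields))
```

Caching tasks (rather than results) means duplicates submitted in the same batch still collapse into one call; a persistent result cache on top of this is one of the design questions I'd want to review together.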
Ideal Background
∙ Strong Python skills with LangChain/LangGraph or similar agentic frameworks
∙ Experience building retrieval-augmented or agent-based LLM pipelines
∙ Familiarity with Microsoft 365 / SharePoint APIs or Graph API is a plus
∙ Hardware verification or EDA experience is a bonus — not required
Engagement
∙ Format: 1-hour daily video call or async code review
∙ Duration: 3–4 weeks | ~5 hrs/week
∙ Flexible scheduling across time zones
What I’m NOT Looking For
This is a mentorship engagement, not build-for-hire. I want someone who can guide architecture decisions and unblock me daily while I implement. Please share relevant shipped projects or GitHub links in your proposal.