Assignment Validator

I am validating GenAI Engineer assignments submitted by candidates.

You should understand the assignment (provided as a PDF) and review the candidate's code to rate the submission.

You also have to give feedback on the assignment.

Write the assignment validator code using LangChain and OpenAI.

I'll help you create code for validating GenAI Engineer assignments. However, I notice you haven't yet shared the PDF document containing the assignment details. To provide accurate validation and feedback, I'll need:

  1. The assignment PDF document to understand the requirements

  2. The candidate's submitted code to evaluate

Once you provide these, I can help create a validation system using LangChain and OpenAI. In the meantime, I can outline a general framework for what such a validator might look like:

I've created a framework for an assignment validator; minimal sketches of the key pieces follow this list. It includes:

  1. Assignment Requirement Analysis:

    • PDF loading and processing using PyPDFLoader

    • Text chunking for better context management

    • Vector storage using FAISS for requirement comparison

  2. Code Analysis:

    • Static code analysis using Python's ast module

    • Metrics collection (functions, classes, documentation)

    • Syntax validation

  3. Evaluation System:

    • Requirement matching using embeddings

    • Scoring based on multiple criteria

    • Structured feedback generation

  4. Feedback Generation:

    • Detailed, constructive feedback

    • Strengths and areas for improvement

    • Best practices recommendations
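As a rough illustration of the requirement-analysis piece, here is a minimal sketch using current LangChain packages. The loader, splitter, and vector-store classes are real LangChain components, but the file path and chunking parameters are placeholder assumptions.

```python
# Minimal sketch of requirement analysis (item 1 above).
# Assumes `pip install langchain-community langchain-openai langchain-text-splitters
# faiss-cpu pypdf` and an OPENAI_API_KEY in the environment.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

def build_requirement_index(pdf_path: str) -> FAISS:
    """Load the assignment PDF, chunk it, and index the chunks in FAISS."""
    pages = PyPDFLoader(pdf_path).load()          # one Document per PDF page
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_documents(pages)      # smaller pieces for retrieval
    return FAISS.from_documents(chunks, OpenAIEmbeddings())

# Hypothetical usage:
# index = build_requirement_index("assignment.pdf")
# top_requirements = index.similarity_search("evaluation criteria", k=4)
```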

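A companion sketch covers the code-analysis, evaluation, and feedback pieces (items 2-4). The metric names, prompt wording, and model choice here are illustrative assumptions, not the exact ones used in the validator described above.

```python
# Sketch of static code analysis via `ast` plus an LLM evaluation prompt.
import ast
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

def collect_code_metrics(source: str) -> dict:
    """Parse the candidate's code and gather simple static metrics."""
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return {"syntax_ok": False, "error": str(exc)}
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    classes = [n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
    documented = [n for n in funcs + classes if ast.get_docstring(n)]
    return {
        "syntax_ok": True,
        "num_functions": len(funcs),
        "num_classes": len(classes),
        "documented_definitions": len(documented),
    }

def evaluate_submission(requirements_text: str, code: str, metrics: dict) -> str:
    """Ask the model for per-requirement scores and structured feedback."""
    prompt = ChatPromptTemplate.from_template(
        "You are reviewing a GenAI Engineer assignment.\n"
        "Requirements:\n{requirements}\n\n"
        "Candidate code:\n{code}\n\n"
        "Static metrics: {metrics}\n\n"
        "Give a 1-10 score per requirement, strengths, areas for improvement, "
        "and best-practice recommendations."
    )
    chain = prompt | ChatOpenAI(model="gpt-4o-mini", temperature=0)
    return chain.invoke(
        {"requirements": requirements_text, "code": code, "metrics": metrics}
    ).content
```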
To use this system effectively, you would need to:

  1. Share the assignment PDF containing requirements

  2. Provide the candidate's submitted code

  3. Set up your OpenAI API key (see the snippet after this list)
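For step 3, a minimal sketch of one common setup, assuming the key lives in a local `.env` file (python-dotenv is an extra convenience dependency, not something LangChain requires):

```python
# Load OPENAI_API_KEY from a local .env file; the file location is an assumption.
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()
assert os.getenv("OPENAI_API_KEY"), "Set OPENAI_API_KEY before running the validator."
```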

Would you please share these materials so I can help you customize the validator for your specific needs? This will allow me to:

  1. Adjust the evaluation criteria

  2. Fine-tune the prompts

  3. Add specific metrics relevant to your assignment


Also, I need another metric called "automated_code_meter", scored from 1 to 10.

A score of 1 means the candidate wrote the code mostly by hand with little automation; 10 means complete automation.

I'll add an automated code meter to measure the level of automation in the candidate's code. Here's the updated version:

I've added the automated code meter functionality with the following features (sketches of the pattern analyzer and the scorer follow below):

  1. Pattern Analysis (_analyze_automation_patterns):

    • Chain patterns (LangChain, etc.)

    • Asynchronous code patterns

    • Loop and iteration patterns

    • API integration

    • Configuration management

    • Error handling

    • Data processing

    • Pipeline patterns

  2. Scoring System (_calculate_automation_score):

    • Weighted scoring for different automation patterns

    • Normalized score from 1-10

    • Uses a logistic function for more nuanced scoring

    • Considers pattern frequency and complexity
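A minimal sketch of the pattern analysis, shown as a standalone function rather than the private `_analyze_automation_patterns` method named above; the categories mirror the list, but the regular expressions themselves are illustrative assumptions.

```python
# Sketch: count occurrences of automation-related patterns in candidate code.
import re

AUTOMATION_PATTERNS = {
    "chains": r"LLMChain|RunnableSequence|RunnablePassthrough",
    "async": r"\basync\s+def\b|\bawait\b|\basyncio\b",
    "loops": r"\bfor\s+\w+\s+in\b|\bwhile\b",
    "api_integration": r"\brequests\.(?:get|post)\b|\bhttpx\b|\bopenai\b",
    "configuration": r"os\.environ|\bgetenv\b|yaml\.safe_load|\bconfigparser\b",
    "error_handling": r"\btry:|\bexcept\b|\braise\b",
    "data_processing": r"\bpandas\b|\bnumpy\b|\bmap\(|\bfilter\(",
    "pipelines": r"\bPipeline\b|\.pipe\(|\.batch\(",
}

def analyze_automation_patterns(source: str) -> dict:
    """Count how often each automation-related pattern appears in the code."""
    return {
        name: len(re.findall(pattern, source))
        for name, pattern in AUTOMATION_PATTERNS.items()
    }
```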

The resulting score maps roughly to these bands (a scoring sketch follows the list):

  • Score 1-3: Mostly manual code with basic loops and conditions

  • Score 4-6: Mixed approach with some automation features

  • Score 7-8: Good automation with multiple patterns

  • Score 9-10: Extensive automation with advanced patterns
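One way to turn the pattern counts into the 1-10 automated_code_meter, again sketched as a standalone function rather than the private `_calculate_automation_score` method named above; the weights, midpoint, and steepness are assumptions to be tuned against real submissions.

```python
# Sketch: convert pattern counts into a 1-10 automated_code_meter score.
import math

# Illustrative weights: heavier automation patterns count for more.
PATTERN_WEIGHTS = {
    "chains": 3.0, "pipelines": 3.0, "async": 2.0, "api_integration": 2.0,
    "configuration": 1.5, "data_processing": 1.5, "error_handling": 1.0, "loops": 0.5,
}

def calculate_automation_score(pattern_counts: dict) -> float:
    """Weight the pattern counts, then map the raw total to 1-10 with a logistic curve."""
    raw = sum(PATTERN_WEIGHTS.get(name, 1.0) * count
              for name, count in pattern_counts.items())
    # Logistic squash: a raw total near the midpoint maps to ~5.5;
    # very low totals approach 1, very high ones approach 10.
    midpoint, steepness = 10.0, 0.3
    return round(1 + 9 / (1 + math.exp(-steepness * (raw - midpoint))), 1)

# Hypothetical usage:
# counts = analyze_automation_patterns(candidate_source)
# automated_code_meter = calculate_automation_score(counts)
```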

The metrics now include the automated_code_meter score alongside the earlier evaluation metrics.

Would you like me to add any specific automation patterns or adjust the scoring weights for your particular assignment requirements?
