
Contributing

Thank you for your interest in contributing to the Salesforce Deployment Failure Analyser.

Getting Started

git clone https://github.com/mssm-sftechstack/salesforce-devops-ai-assistant.git
cd salesforce-devops-ai-assistant
pip install pytest
python -m pytest tests/ -v

No other dependencies are needed to run the tool in mocked mode or to run the tests.

Ways to Contribute

Add a new scenario

Scenarios live in sample_data/ (input) and sample_outputs/ (expected output).

  1. Create sample_data/your_scenario.json following the input schema in the README
  2. Generate output by running: python main.py --input sample_data/your_scenario.json --live
  3. Save the result to sample_outputs/your_scenario_output.json
  4. Register the scenario in the SCENARIOS dict in main.py
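As a rough illustration of step 4, registration might look like the sketch below. The exact shape of the SCENARIOS dict and the existing scenario names here are assumptions; check main.py for the real structure before editing it.

```python
# Hypothetical sketch of the SCENARIOS dict in main.py.
# The key is the scenario name; the paths follow the sample_data/ and
# sample_outputs/ convention described above.
SCENARIOS = {
    # Existing preset scenario (name is illustrative, not the real one):
    "validation_rule_failure": {
        "input": "sample_data/validation_rule_failure.json",
        "output": "sample_outputs/validation_rule_failure_output.json",
    },
    # New entry added for your scenario:
    "your_scenario": {
        "input": "sample_data/your_scenario.json",
        "output": "sample_outputs/your_scenario_output.json",
    },
}
```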

Extend the input schema

  • Update validate_input() in main.py to accept new fields
  • Update validate_output() if the output schema changes
  • Add test cases in tests/test_validation.py
  • Update the schema table in README.md
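A new-field change to validate_input() might look like the following sketch. The field name (deploy_duration_seconds) and the return convention (a list of error messages, empty when valid) are illustrative assumptions, not the real schema; mirror whatever convention main.py actually uses.

```python
# Hypothetical sketch of extending validate_input() in main.py with an
# optional numeric field. "deploy_duration_seconds" is an invented example.
def validate_input(data: dict) -> list:
    """Return a list of validation error messages (empty list means valid)."""
    if not isinstance(data, dict):
        return ["input must be a JSON object"]
    errors = []
    # Checks for the existing required fields would go here. The lines
    # below show how an added optional field might be validated:
    if "deploy_duration_seconds" in data:
        value = data["deploy_duration_seconds"]
        if not isinstance(value, (int, float)):
            errors.append("deploy_duration_seconds must be a number")
        elif value < 0:
            errors.append("deploy_duration_seconds must be non-negative")
    return errors
```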

Improve the prompt

The Claude prompt is the CLAUDE_PROMPT constant in main.py. Validate any prompt changes against all three preset scenarios in --live mode before submitting a pull request.

Fix a bug or add a roadmap item

Open an issue first to describe the change, then submit a pull request referencing the issue.

Running Tests

python -m pytest tests/ -v

All tests must pass before a pull request will be merged.
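New tests in tests/test_validation.py can be plain pytest-style functions. The sketch below inlines a stand-in for validate_input() so it is self-contained; a real test would instead import the function from main.py, and the error-message strings are assumptions.

```python
# Hypothetical test sketch for tests/test_validation.py.
def validate_input(data):
    """Stand-in for main.validate_input: returns a list of error messages."""
    if not isinstance(data, dict):
        return ["input must be a JSON object"]
    return []

def test_rejects_non_object_input():
    assert validate_input([1, 2, 3]) == ["input must be a JSON object"]

def test_accepts_empty_object():
    assert validate_input({}) == []
```

Pytest discovers any function named test_* in the tests/ directory, so no registration step is needed beyond saving the file.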

Code Style

  • Python 3.9+ compatible
  • No external dependencies in the core tool (mocked mode must stay dependency-free)
  • Keep main.py self-contained — helper logic belongs in the same file, not a separate module

Reporting Issues

Open a GitHub issue with:

  • The input JSON you used
  • The error or unexpected output
  • Whether you were using mocked or --live mode