JobGenie - Complete Job Creation Guide
Master JobGenie: Create, configure, and manage Jenkins jobs using simple YAML definitions
🔗 Related Guides:
- Setup Guide - Jenkins setup and JobGenie integration
- JobGenie Documentation - Complete JobGenie reference
- Best Practices - Recommended patterns
📋 Table of Contents
- Overview
- Understanding JobGenie
- YAML Structure
- Creating Your First Job
- Job Types
- Parameters
- Configuration Options
- Advanced Features
- Examples
- Best Practices
- Troubleshooting
Overview
JobGenie transforms Jenkins job creation from manual UI configuration to simple YAML definitions. This guide covers everything you need to create and manage Jenkins jobs using JobGenie.
What You'll Learn
- ✅ How JobGenie auto-detects YAML files
- ✅ YAML structure and syntax
- ✅ Creating different job types (Pipeline, Freestyle)
- ✅ Configuring parameters and build options
- ✅ Advanced features and best practices
Prerequisites
- Jenkins instance with JobGenie integrated (Setup Guide)
- Git repository with JobGenie-Pipelines structure
- Basic understanding of YAML syntax
- Access to create/edit YAML files in repository
Understanding JobGenie
How JobGenie Works
JobGenie uses an auto-discovery engine that:
- Scans Repository: Recursively searches for files ending with -jobs.yml or -jobs.yaml
- Parses YAML: Uses the SnakeYAML library to parse job definitions
- Extracts Metadata: Identifies organization, project, and environment from file path
- Generates Jobs: Uses JobDSL to create/update Jenkins jobs automatically
- Manages Lifecycle: Deletes jobs that are removed from YAML files
File Path Convention
Job definition files must follow this naming pattern:
{organization}/{project}/{environment}/jobs/{project}-{environment}-jobs.yml
Examples:
- amazon/mcloud/prod/jobs/mcloud-prod-jobs.yml
- amazon/mcloud/nonprod/jobs/mcloud-nonprod-jobs.yml
- global/common/prod/jobs/common-prod-jobs.yml
Job Location in Jenkins
Generated jobs are created at:
{organization}/{project}/{environment}/deploy/{GROUP}/{ENV}/{JOB_NAME}
Example:
- YAML: amazon/mcloud/prod/jobs/mcloud-prod-jobs.yml
- Job Name: user-service
- Default GROUP: v2
- Default ENV: prod
- Jenkins Path: amazon/mcloud/prod/deploy/v2/prod/user-service
YAML Structure
Basic Structure
jobgenie:
  default:                  # Default configurations (applied to all jobs)
    HOME_DIR: prod          # Home directory (prod/nonprod)
    GROUP: "v2"             # Version group
    ENV: "prod"             # Environment name
  jobs:                     # List of job definitions
    - NAME: "job-name"      # Job name (required)
      PARAMETERS: []        # Build parameters (optional)
      CONFIGS: {}           # Job configurations (required)
Complete Example
jobgenie:
  default:
    GROUP: "v2"
    ENV: "prod"
  jobs:
    - NAME: "user-service"
      PARAMETERS:
        - { name: 'GitBranch', string: 'production', description: 'Production branch.' }
        - { name: 'DeployVersion', string: 'latest', description: 'Version to deploy.' }
      CONFIGS:
        APP_REPO: "user-service"
        APP_BRANCH: "production"
        DOCKER_BUILD_ARGS: "ENV,SERVICE"
Creating Your First Job
Step 1: Create Directory Structure
cd JobGenie-Pipelines
mkdir -p amazon/myproject/prod/jobs
mkdir -p amazon/myproject/nonprod/jobs
Step 2: Create Job Definition File
vim amazon/myproject/prod/jobs/myproject-prod-jobs.yml
Step 3: Define Your Job
Add the following content:
jobgenie:
  default:
    HOME_DIR: prod
    GROUP: "v4"
    ENV: "prod"
  jobs:
    - NAME: "hello-world-service"
      CONFIGS:
        JOB_TYPE: "freestyle"
        SKIP_GIT: true
        SCRIPT: |-
          echo "Hello World"
          echo "This is my first JobGenie job!"
Step 4: Commit and Push
git add amazon/myproject/prod/jobs/myproject-prod-jobs.yml
git commit -m "Add hello-world-service job definition"
git push origin main
Step 5: Run Seed Job
- Navigate to Jenkins: 0-JobGenie-Generator
- Click Build with Parameters
- Set GitBranch to main
- Click Build
- Verify job creation in Jenkins
Step 6: Verify Job Creation
Check Jenkins for the newly created job:
- Path: amazon/myproject/prod/deploy/v4/prod/hello-world-service
- Status: Job should be ready to build immediately
- Test: Run the job to verify it works
Job Types
Pipeline Jobs
Pipeline jobs use Jenkinsfiles for execution. They're the default job type (no JOB_TYPE needed).
jobgenie:
  default:
    HOME_DIR: prod
    GROUP: "v4"
    ENV: "prod"
  jobs:
    - NAME: "user-authentication-service"
      PARAMETERS:
        - { name: 'GitBranch', string: 'production', description: 'Production application git branch.' }
        - { name: 'DeployVersion', string: 'latest', description: 'Application version to deploy.' }
      CONFIGS:
        APP_REPO: "user-auth-service"
        APP_BRANCH: "production"
        DOCKER_BUILD_ARGS: "ENV,SERVICE"
        SSH_KEYS: "default:/opt/jenkins/keys/prod_key_rsa"
        # Default Jenkinsfile location:
        # {organization}/{project}/{environment}/jenkinsfiles/${JENKINSFILE_DIR}/Jenkinsfile
        JENKINSFILE_DIR: "default"
Freestyle Jobs
Freestyle jobs execute shell scripts directly. Set JOB_TYPE: "freestyle" in CONFIGS.
jobgenie:
  default:
    HOME_DIR: prod
    GROUP: "v4"
    ENV: "prod"
  jobs:
    - NAME: "database-migration-runner"
      PARAMETERS:
        - { name: 'GitBranch', string: 'master', description: 'Migration scripts git branch.' }
        - { name: 'MigrationVersion', string: '', description: 'Specific migration version to run (leave empty for latest).' }
        - { name: 'DryRun', bool: false, description: 'Perform dry run without applying changes.' }
      CONFIGS:
        JOB_TYPE: "freestyle"
        SERVICE: "database-migration-runner"
        APP_REPO: "database-migrations"
        APP_BRANCH: "master"
        SCRIPT: |
          echo "Running database migrations..."
          echo "Branch: ${GitBranch}"
          echo "Version: ${MigrationVersion}"
          # Your migration logic here
          echo "Migrations completed successfully"
Jobs Without Git Checkout
For jobs that don't need source code, use SKIP_GIT: true:
jobgenie:
  default:
    HOME_DIR: prod
    GROUP: "v4"
    ENV: "prod"
  jobs:
    - NAME: "jenkins-configuration-sync"
      PARAMETERS:
        - { name: 'ConfigBranch', string: 'master', description: 'Configuration branch to sync.' }
      CONFIGS:
        JOB_TYPE: "freestyle"
        SKIP_GIT: true
        SKIP_ENJECTED_VARS: true
        DSL_SCRIPT: |
          import io.jenkins.plugins.casc.ConfigurationAsCode;
          ConfigurationAsCode.get().configure()
          println("Configuration sync completed")
Parameters
String Parameters
Text input fields for user input:
PARAMETERS:
  - { name: 'GitBranch', string: 'main', description: 'Git branch to build.' }
  - { name: 'DeployVersion', string: 'latest', description: 'Version to deploy.' }
Choice Parameters
Dropdown selections with predefined options:
PARAMETERS:
  - { name: 'Environment', choices: ['prod', 'prod-dr', 'staging'], description: 'Target environment.' }
  - { name: 'TerraformAction', choices: ['plan', 'apply', 'destroy'], description: 'Terraform action.' }
Boolean Parameters
Checkbox options for true/false values:
PARAMETERS:
  - { name: 'DryRun', bool: false, description: 'Perform dry run.' }
  - { name: 'RollbackOnFailure', bool: true, description: 'Auto-rollback on failure.' }
Using Parameters in Scripts
Parameters are available as environment variables:
CONFIGS:
  SCRIPT: |
    echo "Branch: ${GitBranch}"
    echo "Environment: ${Environment}"
    echo "Dry Run: ${DryRun}"
Configuration Options
Common CONFIGS
| Parameter | Description | Example |
|---|---|---|
| APP_REPO | Application repository name | "user-service" |
| APP_BRANCH | Default application branch | "production" |
| JOB_TYPE | Job type (pipeline or freestyle) | "freestyle" |
| DOCKER_BUILD_ARGS | Docker build arguments | "ENV,SERVICE" |
| DOCKERFILE_PATH | Custom Dockerfile path | "services/order/Dockerfile" |
| SSH_KEYS | SSH key path | "default:/opt/jenkins/keys/key_rsa" |
| SCRIPT | Shell script (freestyle only) | "echo 'Hello'" |
| SKIP_GIT | Skip Git SCM checkout | true or false |
| JENKINSFILE_DIR | Jenkinsfile directory | "default" or "sample" |
| PIPELINE_PATH | Custom job path | "shared" |
| CONFIG_REPO | Config repository URL | "git@github.com:User/Repo.git" |
| CONFIG_BRANCH | Config repository branch | "${GitBranch}" |
Environment Variables
All CONFIGS are automatically injected as environment variables:
CONFIGS:
  APP_REPO: "my-service"
  AWS_REGION: "ap-south-1"
  EKS_CLUSTER: "my-cluster"
These become available as $APP_REPO, $AWS_REGION, $EKS_CLUSTER in your pipelines/scripts.
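For example, a freestyle job can read its own injected CONFIGS values straight from SCRIPT. A minimal sketch, with illustrative job and value names:
jobgenie:
  default:
    GROUP: "v2"
    ENV: "prod"
  jobs:
    - NAME: "print-config-demo"          # illustrative job name
      CONFIGS:
        JOB_TYPE: "freestyle"
        SKIP_GIT: true
        APP_REPO: "my-service"
        AWS_REGION: "ap-south-1"
        SCRIPT: |
          # CONFIGS values arrive as environment variables
          echo "Repo:   ${APP_REPO}"
          echo "Region: ${AWS_REGION}"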
Advanced Features
Custom Jenkinsfile Path
Specify custom Jenkinsfile location:
CONFIGS:
  JENKINSFILE_DIR: "custom"   # Uses: {org}/{project}/{env}/jenkinsfiles/custom/Jenkinsfile
  APP_REPO: "my-repo"
Custom Job Path
Create jobs in custom folder structure:
jobs:
  - NAME: "shared/notification-service"
    CONFIGS:
      APP_REPO: "notification-service"
      PIPELINE_PATH: "shared"
Multi-Environment Jobs
Define jobs for multiple environments:
Non-Production (amazon/myproject/nonprod/jobs/myproject-nonprod-jobs.yml):
jobgenie:
  default:
    GROUP: "v2"
    ENV: "stage"
  jobs:
    - NAME: "my-service"
      CONFIGS:
        APP_REPO: "my-service"
        APP_BRANCH: "develop"
Production (amazon/myproject/prod/jobs/myproject-prod-jobs.yml):
jobgenie:
  default:
    GROUP: "v2"
    ENV: "prod"
  jobs:
    - NAME: "my-service"
      CONFIGS:
        APP_REPO: "my-service"
        APP_BRANCH: "production"
DSL Script Jobs
Execute Groovy DSL scripts directly:
jobs:
  - NAME: "jenkins-config-sync"
    CONFIGS:
      JOB_TYPE: "freestyle"
      SKIP_GIT: true
      SKIP_ENJECTED_VARS: true
      DSL_SCRIPT: |
        import io.jenkins.plugins.casc.ConfigurationAsCode;
        ConfigurationAsCode.get().configure()
        println("Configuration sync completed")
Examples
Example 1: Simple Microservice (Pipeline Job)
jobgenie:
  default:
    HOME_DIR: prod
    GROUP: "v4"
    ENV: "prod"
  jobs:
    - NAME: "user-authentication-service"
      PARAMETERS:
        - { name: 'GitBranch', string: 'production', description: 'Production application git branch.' }
        - { name: 'DeployVersion', string: 'latest', description: 'Application version to deploy.' }
      CONFIGS:
        APP_REPO: "user-auth-service"
        APP_BRANCH: "production"
        DOCKER_BUILD_ARGS: "ENV,SERVICE"
        SSH_KEYS: "default:/opt/jenkins/keys/prod_key_rsa"
Example 2: Infrastructure Job (Freestyle)
jobgenie:
  default:
    HOME_DIR: prod
    GROUP: "v4"
    ENV: "prod"
  jobs:
    - NAME: "terraform-infrastructure-deploy"
      PARAMETERS:
        - { name: 'GitBranch', string: 'main', description: 'Terraform code git branch.' }
        - { name: 'TerraformAction', choices: ['plan', 'apply', 'destroy'], description: 'Terraform action to execute.' }
        - { name: 'Region', string: 'us-east-1', description: 'AWS region for deployment.' }
      CONFIGS:
        JOB_TYPE: "freestyle"
        SERVICE: "terraform-infrastructure-deploy"
        APP_REPO: "terraform-infrastructure"
        APP_BRANCH: "main"
        SCRIPT: |
          echo "Executing Terraform ${TerraformAction} in ${Region}"
          terraform init
          terraform ${TerraformAction}
Example 3: Database Operations (Freestyle)
jobgenie:
  default:
    HOME_DIR: prod
    GROUP: "v4"
    ENV: "prod"
  jobs:
    - NAME: "database-backup-job"
      PARAMETERS:
        - { name: 'DatabaseType', choices: ['postgresql', 'mysql', 'mongodb'], description: 'Database type to backup.' }
        - { name: 'BackupType', choices: ['full', 'incremental'], description: 'Type of backup to perform.' }
      CONFIGS:
        JOB_TYPE: "freestyle"
        SERVICE: "database-backup-job"
        APP_REPO: "backup-scripts"
        APP_BRANCH: "master"
        SCRIPT: |
          echo "Performing ${BackupType} backup for ${DatabaseType}"
          # Backup logic here
Example 4: Seed Job
Seed jobs generate other jobs. Use SEED_JOB: true:
jobgenie:
  default:
    HOME_DIR: prod
    GROUP: "v4"
    ENV: "prod"
  jobs:
    - NAME: "0-poc-DevOps"
      PARAMETERS:
        - { name: 'GitBranch', string: 'master', description: 'Dev application git branch.' }
      CONFIGS:
        SEED_JOB: true
        CONFIG_BRANCH: "${GitBranch}"
        JENKINSFILE: "JobGenie/Jenkinsfile"
Example 5: Job with Custom Dockerfile Path
jobgenie:
  default:
    HOME_DIR: prod
    GROUP: "v4"
    ENV: "prod"
  jobs:
    - NAME: "order-processing-api"
      PARAMETERS:
        - { name: 'GitBranch', string: 'main', description: 'Application git branch.' }
      CONFIGS:
        APP_REPO: "ecommerce-order-api"
        APP_BRANCH: "main"
        DOCKER_BUILD_ARGS: "ENV"
        DOCKERFILE_PATH: "services/order/Dockerfile"
        SSH_KEYS: "default:/opt/jenkins/keys/prod_key_rsa"
Example 6: Job with Custom Pipeline Path
jobgenie:
  default:
    HOME_DIR: prod
    GROUP: "v4"
    ENV: "prod"
  jobs:
    - NAME: "notification-service"
      PARAMETERS:
        - { name: 'GitBranch', string: 'master', description: 'Application git branch.' }
      CONFIGS:
        APP_REPO: "notification-engine"
        APP_BRANCH: "master"
        DOCKER_BUILD_ARGS: "ENV"
        PIPELINE_PATH: "shared"
Example 7: Disabled Job
Create a job in disabled state:
jobgenie:
  default:
    HOME_DIR: prod
    GROUP: "v4"
    ENV: "prod"
  jobs:
    - NAME: "user-authentication-service"
      PARAMETERS:
        - { name: 'GitBranch', string: 'production', description: 'Production application git branch.' }
      CONFIGS:
        APP_REPO: "user-auth-service"
        APP_BRANCH: "production"
        DOCKER_BUILD_ARGS: "ENV,SERVICE"
        DISABLED: true   # Job will be created but disabled
Example 8: Multi-Parameter Job with Custom Template
jobgenie:
  default:
    HOME_DIR: prod
    GROUP: "v4"
    ENV: "prod"
  jobs:
    - NAME: "microservices-orchestrator"
      PARAMETERS:
        - { name: 'TAG', string: 'v3.2.1', description: 'Application version tag.' }
        - { name: 'Services', string: 'all', description: 'Comma-separated list of services to deploy (or "all").' }
        - { name: 'RollbackOnFailure', bool: true, description: 'Automatically rollback on deployment failure.' }
      CONFIGS:
        SERVICE: "microservices-orchestrator"
        CICD_TEMPLATE_NAME: "microservices-deployment-template"
        APP_REPO: "microservices-platform"
        APP_BRANCH: "production"
Best Practices
1. Naming Conventions
- ✅ Use descriptive job names: user-authentication-service, not user-svc
- ✅ Follow kebab-case for job names
- ✅ Use consistent naming across environments
2. YAML Organization
- ✅ One YAML file per environment
- ✅ Group related jobs together (see the sketch after this list)
- ✅ Use comments to document complex configurations
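For instance, a single environment file can keep related jobs side by side and use comment headers as section markers. A minimal sketch with illustrative service names:
jobgenie:
  default:
    GROUP: "v2"
    ENV: "prod"
  jobs:
    # --- Core services ---
    - NAME: "user-service"
      CONFIGS:
        APP_REPO: "user-service"
        APP_BRANCH: "production"
    - NAME: "order-service"
      CONFIGS:
        APP_REPO: "order-service"
        APP_BRANCH: "production"
    # --- Maintenance jobs ---
    - NAME: "database-backup-job"
      CONFIGS:
        JOB_TYPE: "freestyle"
        APP_REPO: "backup-scripts"
        SCRIPT: |
          echo "Running backup..."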
3. Parameter Design
- ✅ Provide clear descriptions for all parameters
- ✅ Use choice parameters for limited options
- ✅ Set sensible defaults for string parameters
4. Configuration Management
- ✅ Store sensitive data in Jenkins credentials, not YAML
- ✅ Use environment variables for environment-specific values
- ✅ Leverage the default section for common configurations (sketched after this list)
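As an example of leaning on the default section: values defined there apply to every job in the file, so each job only declares what differs. A minimal sketch with illustrative names:
jobgenie:
  default:
    HOME_DIR: prod
    GROUP: "v2"          # shared by every job below
    ENV: "prod"
  jobs:
    - NAME: "payments-api"
      CONFIGS:
        APP_REPO: "payments-api"     # only job-specific values here
        APP_BRANCH: "production"
    - NAME: "billing-api"
      CONFIGS:
        APP_REPO: "billing-api"
        APP_BRANCH: "production"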
5. Version Control
- ✅ Commit YAML changes via pull requests
- ✅ Review job definitions before merging
- ✅ Tag releases for production job definitions
6. Testing
- ✅ Test job definitions in non-production first
- ✅ Validate YAML syntax before committing
- ✅ Verify job creation after seed job runs
Troubleshooting
Jobs Not Created
Problem: Seed job runs but no jobs are created.
Solutions:
- Verify YAML file naming: must end with -jobs.yml or -jobs.yaml
- Check that the YAML syntax is valid (use yamllint)
- Review seed job console output for errors
- Ensure the file path matches the expected structure: {org}/{project}/{env}/jobs/{file}.yml
- Verify the jobgenie: key is present at the root level (see the minimal example below)
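As a sanity check, a minimal file that satisfies all of the above could look like this (path and job name are placeholders):
# File: amazon/myproject/prod/jobs/myproject-prod-jobs.yml   (name must end with -jobs.yml)
jobgenie:                      # root-level key must be present
  default:
    GROUP: "v2"
    ENV: "prod"
  jobs:
    - NAME: "smoke-test"
      CONFIGS:
        JOB_TYPE: "freestyle"
        SKIP_GIT: true
        SCRIPT: |
          echo "JobGenie discovered this file"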
YAML Parsing Errors
Problem: Seed job fails with YAML parsing errors.
Solutions:
- Validate YAML syntax using online validators or yamllint
- Check indentation (must be spaces, not tabs)
- Verify all required fields are present (default, jobs)
- Review console output for specific line numbers
Jobs Created in Wrong Location
Problem: Jobs appear in unexpected folders.
Solutions:
- Check GROUP and ENV in the default section
- Verify PIPELINE_PATH if using custom paths
- Review the job name structure (slashes create subfolders)
Parameters Not Appearing
Problem: Build parameters don't show up in the job.
Solutions:
- Verify parameter syntax is correct
- Check parameter names don't conflict with reserved words
- Ensure PARAMETERS is a list, not a map
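To illustrate the last point, PARAMETERS must be a YAML list of parameter entries, not a map keyed by parameter name:
# Correct - a list of parameter entries
PARAMETERS:
  - { name: 'GitBranch', string: 'main', description: 'Git branch to build.' }
  - { name: 'DryRun', bool: false, description: 'Perform dry run.' }

# Incorrect - a map keyed by parameter name is not treated as a parameter list
PARAMETERS:
  GitBranch:
    string: 'main'
    description: 'Git branch to build.'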
Environment Variables Not Available
Problem: CONFIGS values not available as environment variables.
Solutions:
- Ensure SKIP_ENJECTED_VARS is not set to true
- Check that CONFIGS values don't contain newlines (use SCRIPT for multi-line; see the sketch below)
- Verify job type supports environment variables
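To keep injected variables working, hold CONFIGS values to a single line and move multi-line logic into SCRIPT, as sketched below with illustrative values:
CONFIGS:
  JOB_TYPE: "freestyle"
  APP_REPO: "my-service"          # single-line values inject cleanly
  SCRIPT: |                       # multi-line logic belongs here, not in plain CONFIGS values
    echo "Deploying ${APP_REPO}"
    echo "Step two..."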
Getting More Help
- Check the JobGenie Documentation for detailed reference
- Review Setup Guide for integration help
- Contact: HarryTheDevOpsGuy@gmail.com
Next Steps
Now that you can create jobs with JobGenie:
- Explore Advanced Features: Try custom paths, multi-environment setups
- Review Best Practices: Check Best Practices Guide
- Understand Architecture: Read Architecture Documentation
- Learn More: See Complete JobGenie Reference
💼 Need Professional Help?
Looking for expert assistance with JobGenie job creation, CI/CD pipeline development, or DevOps automation?
Contact: HarryTheDevOpsGuy@gmail.com | Portfolio
Services: DevOps as a Service Guide
Built with ❤️ by the DevOps Team
"Soch Wahi, Approach Nai" - Same Vision, New Approach