Factory.ai

Apache Superset

Python

Strong Style & Validation

Apache Superset reaches Level 4 with a 100% Style & Validation pass rate, and it currently meets production grade with 41 of 57 criteria passing (72%). The key areas for improvement are listed under Opportunities below.

Strengths

01
Style & Validation (100%)
Includes Formatter, Large File Detection, Lint Config.
02
Build System (82%)
Includes Automated PR Review, Build Cmd Doc, Deps Pinned.
03
Testing (83%)
Includes Integration Tests Exist, Test Coverage Thresholds, Test Isolation.

Opportunities

01
Skills
Create .factory/skills to give agents reusable, tested capabilities for common tasks.
02
Error Tracking Contextualized
Integrate error tracking (Sentry, Bugsnag) to capture and contextualize production errors; a minimal integration sketch follows this list.
03
Runbooks Documented
Create runbooks for common operational scenarios to reduce incident response time.
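
For opportunity 02, a minimal sketch of what a Sentry integration could look like for the Flask-based main app (the DSN and sample rate are placeholders, not values from the repo):

import sentry_sdk
from sentry_sdk.integrations.flask import FlaskIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    integrations=[FlaskIntegration()],
    traces_sample_rate=0.1,   # sample a fraction of transactions for performance context
    send_default_pii=False,   # keep user PII out of error events by default
)

Calling sentry_sdk.init() before the Flask app starts handling requests is enough for unhandled exceptions to be captured with request context attached.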

All Criteria

Style & Validation: 8/8 (100%)
code_modularization: Skipped - large enterprise project where module boundary enforcement is not critical for agents
cyclomatic_complexity: Skipped - would require running complexity analysis tools not evident in static config
dead_code_detection: Skipped - no knip, ts-prune, vulture, or similar dead code detection tools configured
duplicate_code_detection: Skipped - no jscpd, PMD CPD, or other duplicate code detection configured in CI/pre-commit
formatter: Main app: Black (pyproject.toml) + Prettier (superset-frontend). Websocket: Prettier (.prettierrc.json)
large_file_detection: Pre-commit hook check-added-large-files configured in .pre-commit-config.yaml
lint_config: Main app: ruff/pylint (pyproject.toml) + ESLint (superset-frontend). Websocket: ESLint configured
n_plus_one_detection: Skipped - main app uses SQLAlchemy but no bullet/nplusone detection configured; websocket has no ORM
naming_consistency: ESLint @typescript-eslint/naming-convention for the frontend, ruff naming rules for the Python backend
pre_commit_hooks: Pre-commit config covers all apps: Python (ruff, mypy), TypeScript (prettier, oxlint)
strict_typing: Both apps use strict type checking: tsconfig strict:true for TS, mypy strict mode for Python (see the sketch after this list)
tech_debt_tracking: GitHub workflow tech-debt.yml uploads metrics to Google Sheets, tracking tech debt systematically
type_check: Main app: mypy with strict checks + tsconfig.json with strict:true. Websocket: tsconfig.json with strict checks
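
As an illustration of what strict typing implies on the Python side (example code, not taken from the repo): mypy's strict mode rejects implicit Any and unannotated functions, so helpers must carry full annotations.

from typing import Optional

def row_limit(raw: Optional[str], default: int = 1000) -> int:
    # Every parameter and the return type are annotated; mypy --strict
    # would flag this function if any annotation were missing.
    return int(raw) if raw is not None else default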
Build System: 9/11 (82%)
agentic_development: Only Dependabot co-authorship found; no evidence of AI agent (droid/Claude/etc.) participation in development
automated_pr_review: CodeAnt AI generates automated code review comments on PRs (verified in recent PR #37178)
build_cmd_doc: AGENTS.md documents setup commands: 'npm install && npm run dev' for the frontend, Docker setup for the full stack
build_performance_tracking: Skipped - no evidence of build duration measurement or optimization tracking
dead_feature_flag_detection: Feature flag infrastructure exists but no tooling to detect stale/unused flags
deployment_frequency: Skipped - requires time-series analysis of releases/deployments, which cannot be determined from a static repo scan
deps_pinned: Multiple package-lock.json files for JS/TS and requirements.txt with version pins for Python dependencies
fast_ci_feedback: Skipped - would require detailed timing analysis of CI workflows across multiple PRs
feature_flag_infrastructure: Custom feature flag system in superset/config.py with a FEATURE_FLAGS dict and GET_FEATURE_FLAGS_FUNC (see the sketch after this list)
heavy_dependency_detection: Skipped - main app frontend likely has webpack-bundle-analyzer but not verified; websocket N/A (Node.js backend)
monorepo_tooling: Lerna configured with a workspace structure in superset-frontend/package.json for managing multiple packages
progressive_rollout: Skipped - no evidence of canary deployments or percentage-based rollouts in deployment configs
release_automation: Automated release pipelines in .github/workflows: release.yml and tag-release.yml handle releases automatically
release_notes_automation: Multiple release workflows found: release.yml, embedded-sdk-release.yml, and superset-helm-release.yml automate releases
rollback_automation: Skipped - no documented rollback procedures or automation found in deployment workflows
single_command_setup: Docker Compose setup documented in README with a single command: docker-compose up
unused_dependencies_detection: Skipped - no depcheck (JS) or deptry (Python) found in CI or pre-commit hooks
vcs_cli_tools: GitHub CLI (gh) is installed and authenticated (verified via gh auth status)
version_drift_detection: Skipped - monorepo exists but no syncpack, manypkg, or similar version consistency tooling found
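
Superset's feature flag pattern centers on a plain dict that deployments override in superset_config.py; a minimal sketch of that pattern (the flag names and the per-request tweak below are illustrative, not copied from the repo):

# superset_config.py -- deployment-specific overrides
FEATURE_FLAGS = {
    "ALERT_REPORTS": True,
    "DASHBOARD_RBAC": False,
}

def GET_FEATURE_FLAGS_FUNC(feature_flags):
    # Optional hook: receives the merged flags and may adjust them,
    # e.g. per environment or per user, before they are returned.
    feature_flags["DASHBOARD_RBAC"] = True
    return feature_flags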
Testing: 5/6 (83%)
flaky_test_detection: Skipped - no retry config (jest-retry, pytest-rerunfailures) or flaky test tracking tools found
integration_tests_exist: Playwright (new) and Cypress (legacy) for E2E testing; pytest integration tests for the backend
test_coverage_thresholds: Jest has coverageThreshold in package.json; pytest coverage configured in pyproject.toml
test_isolation: Jest runs with --max-workers=80% (parallel); pytest can run with pytest-xdist for parallelization (see the sketch after this list)
test_naming_conventions: Jest testMatch patterns, pytest naming in pytest.ini, TypeScript/Python test file conventions enforced
test_performance_tracking: Skipped - no evidence of test duration tracking/monitoring in CI outputs or analytics platforms
unit_tests_exist: Both apps have extensive test coverage: thousands of .test.ts/.tsx and test_*.py files
unit_tests_runnable: Frontend has Jest config issues (@emotion/jest missing); Python tests need dependencies installed; websocket tests are likely runnable
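
A small example of the backend test conventions the criteria above refer to (the file path, fixture, and test names are hypothetical): pytest discovers test_*.py files, and per-test fixtures keep state isolated so suites can be parallelized with pytest-xdist.

# tests/unit_tests/charts/test_limits.py  (hypothetical path)
import pytest

@pytest.fixture
def chart_payload():
    # A fresh dict per test keeps tests independent of one another.
    return {"slice_name": "example", "viz_type": "table", "row_limit": 100}

def test_row_limit_is_positive(chart_payload):
    assert chart_payload["row_limit"] > 0

def test_viz_type_is_set(chart_payload):
    assert chart_payload["viz_type"]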
Documentation: 5/8 (63%)
agents_md: AGENTS.md exists at the repo root with 10KB+ of content documenting setup, testing, and LLM context
agents_md_validation: AGENTS.md exists but no CI validation of commands or automated consistency checks
api_schema_docs: Main app has OpenAPI docs (docs/static/resources/openapi.json). Websocket N/A (internal service)
automated_doc_generation: OpenAPI documentation auto-generated at /swagger/v1 from docstrings and schemas (see the sketch after this list)
documentation_freshness: AGENTS.md modified in the last 180 days (git log confirms recent updates)
readme: Comprehensive README.md with setup instructions, architecture overview, and contribution guidelines
service_flow_documented: Architecture diagrams found: scripts/erd/erd.puml, docs/docs/installation/architecture.mdx
skills: No skills directories found (.factory/skills/, .skills/, .claude/skills/)
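
The main app's OpenAPI document is derived from its Python schemas rather than written by hand. A generic sketch of that schema-driven pattern using marshmallow and apispec (illustrative of the approach only, not Superset's actual wiring; the schema and field names are made up):

from apispec import APISpec
from apispec.ext.marshmallow import MarshmallowPlugin
from marshmallow import Schema, fields

class DashboardSchema(Schema):
    id = fields.Int()
    dashboard_title = fields.Str()

spec = APISpec(
    title="Example API",
    version="1.0.0",
    openapi_version="3.0.2",
    plugins=[MarshmallowPlugin()],
)
spec.components.schema("Dashboard", schema=DashboardSchema)
print(spec.to_dict())  # OpenAPI document generated from the schema definition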
Dev Environment: 3/4 (75%)
database_schema: Main app has SQLAlchemy models defining the database schema (see the sketch after this list). Websocket N/A (no database)
devcontainer: .devcontainer/devcontainer.json configured with Python, Node.js, and required extensions
devcontainer_runnable: Skipped - would require building the container to verify, which is beyond static analysis scope
env_template: .envrc.example provides a template for environment variables with port configuration
local_services_setup: docker-compose.yml sets up Postgres, Redis, and other dependencies for local development
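
The database_schema criterion refers to SQLAlchemy declarative models; a simplified illustration of the pattern (table and column names here are illustrative, not copied from superset/models/):

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Dashboard(Base):
    __tablename__ = "dashboards"

    id = Column(Integer, primary_key=True)
    dashboard_title = Column(String(500), nullable=False)
    slug = Column(String(255), unique=True)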
Debugging & Observability: 4/8 (50%)
alerting_configured: Alert/report infrastructure in superset/reports/ with notifications, scheduling, and email/Slack integrations
circuit_breakers: Skipped - no circuit breaker libraries (opossum, resilience4j, polly) found in dependencies
code_quality_metrics: Skipped - no code-scanning API access; coverage tracked but quality metrics platform not verified
deployment_observability: No dashboard links (Datadog, Grafana, New Relic) found in documentation or code comments
distributed_tracing: X-Request-ID and OpenTelemetry references found in both applications for request tracing (see the sketch after this list)
error_tracking_contextualized: No Sentry, Bugsnag, or Rollbar configuration found in either application
health_checks: Main app has a /health endpoint referenced in code. Websocket health check status unknown
metrics_collection: hot-shots (Datadog StatsD client) in the websocket app; references to metrics in the main app
profiling_instrumentation: Skipped - no APM/profiling tools (Datadog APM, clinic.js, Pyroscope) found in dependencies
runbooks_documented: No runbooks directory or links to external runbook documentation found in README/docs
structured_logging: Winston configured in superset-websocket package.json; Python logging setup in the main app
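
To make the distributed_tracing note concrete, a minimal OpenTelemetry sketch of manual span instrumentation in Python (the exporter, tracer name, and span name are placeholders; this is not Superset's actual configuration):

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())  # swap for an OTLP exporter in production
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("superset.example")

with tracer.start_as_current_span("load_dashboard"):
    pass  # work executed inside the span is timed and correlated with the trace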
Security: 3/6 (50%)
automated_security_review: Skipped - code-scanning API returned 404; cannot verify SAST tools without authenticated access
branch_protection: Skipped - requires GitHub admin API access to verify branch protection rules
codeowners: .github/CODEOWNERS file exists with team ownership assignments
dast_scanning: Skipped - no OWASP ZAP, Burp Suite, or other DAST tools found in CI workflows
dependency_update_automation: .github/dependabot.yml configured for automated dependency updates
gitignore_comprehensive: .gitignore properly excludes .env, node_modules, .DS_Store, .idea, .vscode, and build artifacts
log_scrubbing: Skipped - Winston logging exists but no explicit redaction/sanitization config verified (see the sketch after this list)
pii_handling: Skipped - BI platform processes user data but no PII detection tools (Presidio, Macie) configured
privacy_compliance: Skipped - BI platform processes user data but no explicit consent management/GDPR infrastructure found
secret_scanning: Skipped - requires GitHub admin API access to verify secret scanning configuration
secrets_management: .env files properly gitignored but no cloud secrets manager integration (AWS Secrets Manager, Vault, etc.) found
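
Closing the log_scrubbing gap does not require new services; on the Python side a simple logging filter can redact obvious secrets before records are emitted (the pattern and logger name below are illustrative):

import logging
import re

class RedactSecretsFilter(logging.Filter):
    SECRET_RE = re.compile(r"(password|token|secret)=\S+", re.IGNORECASE)

    def filter(self, record: logging.LogRecord) -> bool:
        # Simplified: assumes the message is already a formatted string.
        record.msg = self.SECRET_RE.sub(r"\1=***", str(record.msg))
        return True

logging.getLogger("superset").addFilter(RedactSecretsFilter())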
Task Discovery: 4/4 (100%)
backlog_health: Of 50 recent issues, >90% have descriptive titles (>10 chars) and labels; active maintenance is evident
issue_labeling_system: Comprehensive labeling system with 50+ labels including priority, type, and area (verified via gh issue list)
issue_templates: .github/ISSUE_TEMPLATE/ contains structured templates: bug-report.yml, cosmetic.md, sip.md
pr_templates: .github/PULL_REQUEST_TEMPLATE.md exists with sections for description, testing, and context
Product & Analytics: 0/2 (0%)
error_to_insight_pipeline: No Sentry-GitHub integration or error-to-issue automation configured
product_analytics_instrumentation: No Mixpanel, Amplitude, PostHog, or similar product analytics found in dependencies (see the sketch below)
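
If product analytics were added, the instrumentation itself is small; a hedged sketch using the PostHog Python client (the API key, user id, event name, and properties are all placeholders):

import posthog

posthog.project_api_key = "phc_placeholder"  # placeholder project key
posthog.capture(
    "user-123",              # distinct id
    "dashboard_viewed",      # event name
    {"dashboard_id": 42},    # event properties
)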
