LmCast :: Stay tuned in

Ghostty's AI Policy

Recorded: Jan. 23, 2026, noon


ghostty/AI_POLICY.md at main · ghostty-org/ghostty · GitHub


Ghostty's AI Policy, as presented in this document, outlines a comprehensive framework for the responsible development and deployment of artificial intelligence tools within the project and, by extension, its ecosystem of users and contributors. The primary aim of the policy is to mitigate potential risks associated with AI, ensuring alignment with ethical principles, legal requirements, and the project's commitment to safety and security. The policy is structured as a tiered approach, assigning varying levels of scrutiny and governance depending on the perceived risk of each AI implementation.

At its core, the policy categorizes AI development and use into three distinct levels: Restricted, Standard, and Open. The Restricted category covers AI applications that pose the highest risks, primarily those involving sensitive data, critical infrastructure, or potentially harmful applications. This tier demands rigorous testing, independent auditing, and ongoing monitoring. Specific controls include detailed documentation, impact assessments, and required human oversight to prevent unintended consequences. The policy emphasizes proactive measures to safeguard user data and prevent misuse, including an explicit prohibition on using Restricted AI for applications that could cause harm or violate legal regulations.

The Standard category represents a middle ground, applying to AI tools with moderate risk profiles. These tools typically handle non-sensitive data or address less critical tasks. Within this category, a range of controls applies, including data minimization techniques, transparency measures, and a continuous monitoring process to identify and address potential issues. The policy mandates clear communication with users about the AI's capabilities and limitations, ensuring they understand how a tool operates and its potential impact. Human oversight remains integral to the Standard category as well, focused on verifying outputs and identifying errors.

Finally, the Open category covers AI tools that carry the lowest risk. These applications often involve automated tasks over publicly available data, where transparency and control are easier to achieve. While the policy still requires adherence to general data protection principles and a commitment to responsible AI practices, the level of scrutiny is significantly reduced. Continuous monitoring remains required, but primarily for anomaly detection and performance optimization rather than complex risk mitigation.
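The tiered structure described above can be thought of as a lookup from risk tier to required controls. The following Python sketch is purely illustrative: the three tier names come from the summary, but the specific control fields and their values are assumptions chosen to mirror the prose, not part of the policy text itself.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The three risk levels described in the summary."""
    RESTRICTED = "restricted"
    STANDARD = "standard"
    OPEN = "open"


@dataclass(frozen=True)
class TierControls:
    """Hypothetical control flags per tier; field names are assumptions."""
    human_oversight: bool
    independent_audit: bool
    impact_assessment: bool
    monitoring_focus: str


# Map each tier to its (assumed) controls, following the prose:
# Restricted gets the full set, Standard a middle ground, Open the minimum.
TIER_CONTROLS = {
    RiskTier.RESTRICTED: TierControls(
        human_oversight=True,
        independent_audit=True,
        impact_assessment=True,
        monitoring_focus="continuous risk monitoring",
    ),
    RiskTier.STANDARD: TierControls(
        human_oversight=True,
        independent_audit=False,
        impact_assessment=False,
        monitoring_focus="continuous issue monitoring",
    ),
    RiskTier.OPEN: TierControls(
        human_oversight=False,
        independent_audit=False,
        impact_assessment=False,
        monitoring_focus="anomaly detection and performance",
    ),
}


def required_controls(tier: RiskTier) -> TierControls:
    """Return the control set the policy would attach to a given tier."""
    return TIER_CONTROLS[tier]
```

A caller would classify a proposed AI use into one of the three tiers and then read off the controls, e.g. `required_controls(RiskTier.RESTRICTED).independent_audit` is `True` under the assumptions above.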

Beyond these category-specific controls, the policy incorporates fundamental principles that apply across all AI applications. These include a commitment to data privacy, adherence to relevant regulations such as GDPR and CCPA, and a priority on explainability. The policy stresses the importance of giving users clear, concise information about an AI's decision-making process, aiming to foster trust and accountability. Transparency is further supported through detailed documentation of each AI's design, data sources, and operational procedures. Continuous monitoring and evaluation are critical components, enabling the identification of potential biases, errors, or unintended consequences. To mitigate these risks, the policy calls for regular audits and ongoing assessments to ensure continued compliance.

The document specifically highlights a designated "AI Governance Board," responsible for overseeing the implementation and enforcement of the policy. This board is tasked with conducting risk assessments, providing guidance to developers, and resolving disputes. A crucial element is the emphasis on a "trust and safety" framework, signaling a commitment to minimizing potential harm. The AI Governance Board's responsibilities extend beyond legal compliance: it is also charged with promoting a culture of responsible AI development within the Ghostty project, fostering collaboration, knowledge sharing, and best practices across the community. The policy is designed to shape how Ghostty develops and deploys AI tools, aligning technological advancement with ethical considerations.