We encourage you to use your business email address when applying. Applications submitted with personal email addresses (for example, @yahoo.com, @hotmail.com, etc.) are more likely to be declined.
Fill out a separate application for each use case you have for your product.
Follow the instructions in the form and enter your sponsorship subscription GUID.
Which Azure support option is right for my question?
For advice on how to build, migrate, or optimize, contact our Azure Advisory team by logging in to Founders Hub --> Develop --> "Pair with an Azure Engineer".
To ask your own questions and see answers to existing questions, please visit the Microsoft Q&A Forum.
To build your skills and gain experience with different Azure tools and resources, please visit Azure Learn for Startups
Azure Advisory and Support
How do we get help migrating to Azure from another platform?
Once you reach the "Develop" Level in Founders Hub, you have unlimited access to pairing sessions with Azure Engineers who provide guidance as you build. This can be anything from a quick 30-min discussion on choosing the right products to a 4-hour architectural review or infrastructure optimization consultation.
Book your first session by logging into Founders Hub and navigating to the Develop tab.
What types of Azure Advisory services are available?
Startups in the Develop, Grow, and Scale stages have unlimited, complimentary access to a range of 1:1 Azure advisory services, from help choosing products to expert advice as you architect and optimize. To request your first session, log into Founders Hub --> Develop --> "Pair with an Azure Engineer".
What is the difference between OpenAI and Azure OpenAI Services (Azure OAIS)?
OpenAI is a third-party benefit and not a Microsoft product. Startups have access to $2,500 in OpenAI credits through Founders Hub. Azure OpenAI Services is a Microsoft Azure product that can be paid for with Azure credits. Many startups use OpenAI for testing and transition to Azure OpenAI Services for production and customer-facing deployments.
My Azure OpenAI Services application was declined. What should I do?
We expanded access to this service in May 2023. If you were declined prior to this, please apply again. If you were declined after this, reach out to firstname.lastname@example.org.
How do I request an increase in quota limits for Azure OpenAI Service?
Companies must be registered businesses and pass domain verification to receive quota limit increases. Information about the limits and process for requesting increases can be found here.
What are the quota limits for Azure OpenAI Services?
See this document for information on default limits
How do rate limits (TPM and RPM) work in Azure OpenAI?
Rate limits in Azure OpenAI are defined by TPM (tokens per minute) and RPM (requests per minute). The TPM quota determines the maximum number of tokens a model deployment can process per minute. RPM is derived from TPM rather than set separately: quota is allocated at 6 RPM per 1,000 TPM, so for every 1,000 TPM you have a maximum of 6 RPM available.
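As a quick illustration of the 6-RPM-per-1,000-TPM ratio (the function name here is ours, not part of any SDK):

```python
# Sketch: deriving the requests-per-minute (RPM) allowance from a TPM quota,
# using the 6 RPM per 1,000 TPM ratio described above.

def rpm_from_tpm(tpm: int) -> int:
    """Return the maximum requests per minute for a given tokens-per-minute quota."""
    return (tpm // 1000) * 6

# A deployment with a 30,000 TPM quota allows up to 180 requests per minute.
print(rpm_from_tpm(30_000))  # 180
```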
If your requests exceed the allowed RPM, you may encounter a 429 throttling error, indicating that the system is limiting your request rate. To handle this, implement strategies such as exponential backoff and retries to manage the request rate and keep performance smooth.
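The retry strategy above can be sketched as follows. This is a minimal, SDK-agnostic example: `send_request` and `RateLimitError` are stand-ins we define here, not Azure OpenAI API names; the real client library raises its own rate-limit exception type.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the exception your client raises on an HTTP 429 response."""

def call_with_backoff(send_request, max_retries: int = 5, base_delay: float = 1.0):
    """Retry a throttled call with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return send_request()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # Give up after the final attempt.
            # Wait 1s, 2s, 4s, ... plus random jitter to avoid synchronized retries.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

In production you would wrap your actual Azure OpenAI request in `send_request` and catch the SDK's rate-limit exception instead of the placeholder class.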
What is the difference between TPM and context length of a model in Azure OpenAI?
Tokens-per-minute (TPM) is the rate limit assigned to a deployment in Azure OpenAI: the maximum number of tokens the deployment can process per minute. The context length of a model, on the other hand, is the maximum number of tokens that can be included in the input for a single API call. The TPM rate limit is enforced across requests based on the assigned TPM value, while the context length caps the size of the input the model can process in one call.
It's important to note that the TPM allocation is not related to the max input token limit of a model.
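The distinction can be made concrete with two independent checks. The limits below are illustrative values we chose for the sketch, not defaults for any specific model, and in practice you would count tokens with a real tokenizer rather than assume the counts:

```python
# Sketch: context length caps a *single* request, while TPM caps tokens
# across *all* requests in a minute. Both limits here are example values.

CONTEXT_LENGTH = 8_192   # max tokens in one request (example model limit)
TPM_QUOTA = 30_000       # max tokens processed per minute (example deployment quota)

def fits_in_context(prompt_tokens: int, max_output_tokens: int) -> bool:
    # The prompt plus the requested completion must fit the context window.
    return prompt_tokens + max_output_tokens <= CONTEXT_LENGTH

def within_tpm(tokens_used_this_minute: int, request_tokens: int) -> bool:
    # The deployment-level rate limit is independent of any single request's size.
    return tokens_used_this_minute + request_tokens <= TPM_QUOTA
```

A request can pass one check and fail the other: a small prompt fits the context window but may still be throttled if the deployment has already consumed its TPM quota this minute.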
Will my data be used internally for training or fine tuning the Azure OpenAI models?
Azure OpenAI doesn't use customer data to retrain models.
Your prompts (inputs) and completions (outputs), your embeddings, and your training data:
- are NOT available to other customers
- are NOT available to OpenAI
- are NOT used to improve OpenAI models
- are NOT used to improve any Microsoft or 3rd party products or services
- are NOT used for automatically improving Azure OpenAI models for your use in your resource (the models are stateless unless you explicitly fine-tune them with your training data)
Your fine-tuned Azure OpenAI models are available exclusively for your use.
What is the significance of content filtering for OpenAI models, and how do I customize it per use case?