JupyterLab
Launch fully managed JupyterLab in seconds. Use the latest web-based interactive development environment for notebooks, code, and data. Its flexible and extensible interface allows you to easily configure machine learning (ML) workflows. Get AI-powered assistance for code generation, troubleshooting, and expert guidance to accelerate your ML development—all within your notebook environment.
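For teams that script their environment setup, the following is a minimal sketch of provisioning and launching a JupyterLab space with the AWS SDK for Python (boto3), assuming an existing SageMaker Studio domain and user profile; the domain ID, profile name, space name, and instance type are hypothetical placeholders.

```python
import time

import boto3

sm = boto3.client("sagemaker")

# Placeholder identifiers for an existing Studio domain and user profile.
domain_id = "d-examplestudio"
space_name = "my-jupyterlab-space"

# Create a private space that hosts a JupyterLab application.
sm.create_space(
    DomainId=domain_id,
    SpaceName=space_name,
    OwnershipSettings={"OwnerUserProfileName": "my-user-profile"},
    SpaceSharingSettings={"SharingType": "Private"},
    SpaceSettings={
        "AppType": "JupyterLab",
        "SpaceStorageSettings": {"EbsStorageSettings": {"EbsVolumeSizeInGb": 20}},
    },
)

# Space creation is asynchronous; wait until it is in service.
while sm.describe_space(DomainId=domain_id, SpaceName=space_name)["Status"] != "InService":
    time.sleep(10)

# Launch the JupyterLab app on a managed instance.
sm.create_app(
    DomainId=domain_id,
    SpaceName=space_name,
    AppType="JupyterLab",
    AppName="default",
    ResourceSpec={"InstanceType": "ml.t3.medium"},
)
```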
Code Editor, based on Code-OSS
Use the lightweight and powerful code editor, and boost productivity with its familiar shortcuts, terminal, debugger, and refactoring tools. Choose from thousands of Visual Studio Code–compatible extensions available in the Open VSX extension gallery to enhance your development experience. Enable version control and cross-team collaboration through GitHub repositories. Use the most popular ML frameworks out of the box with the preconfigured Amazon SageMaker distribution. Seamlessly integrate with AWS services through the AWS Toolkit for Visual Studio Code, including built-in access to AWS data sources such as Amazon Simple Storage Service (Amazon S3) and Amazon Redshift, and increase coding efficiency via chat-based and inline code suggestions powered by Amazon Q Developer.
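As a small illustration of that built-in data access, the sketch below reads a CSV object from Amazon S3 into a pandas DataFrame from a Code Editor terminal or notebook; the bucket name and object key are hypothetical placeholders.

```python
import io

import boto3
import pandas as pd

# Placeholder bucket and key; the execution role is assumed to have s3:GetObject access.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-example-bucket", Key="datasets/churn.csv")

# Load the object body into a DataFrame for quick exploration.
df = pd.read_csv(io.BytesIO(obj["Body"].read()))
print(df.head())
```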
RStudio
Access and evaluate foundation models (FMs)
Prepare data at scale
Simplify your data workflows with a unified environment for data engineering, analytics, and ML. Run Spark jobs interactively using Amazon EMR and AWS Glue serverless Spark environments, and monitor them using Spark UI. Use the built-in data preparation capability to visualize data, identify data quality issues, and apply recommended solutions to improve data quality. Automate your data preparation workflows quickly by scheduling your notebook as a job in a few steps. Store, share, and manage ML model features in a central feature store.
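The sketch below illustrates the feature store part of this workflow with the SageMaker Python SDK: it creates a feature group from a small pandas DataFrame and ingests records into it. The feature group name, S3 location, and IAM role ARN are placeholders, and a SageMaker execution role with feature store permissions is assumed.

```python
import time

import pandas as pd
from sagemaker.session import Session
from sagemaker.feature_store.feature_group import FeatureGroup

session = Session()

# Hypothetical customer features; event_time is a Unix timestamp.
df = pd.DataFrame(
    {
        "customer_id": ["c-001", "c-002"],
        "avg_order_value": [42.5, 17.0],
        "event_time": [time.time()] * 2,
    }
)
df["customer_id"] = df["customer_id"].astype("string")  # explicit dtypes for type inference

fg = FeatureGroup(name="customer-features", sagemaker_session=session)
fg.load_feature_definitions(data_frame=df)  # infer feature definitions from the DataFrame

fg.create(
    s3_uri="s3://my-example-bucket/feature-store",                     # offline store location
    record_identifier_name="customer_id",
    event_time_feature_name="event_time",
    role_arn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder role
    enable_online_store=True,
)

# Creation is asynchronous; wait until the feature group is ready before ingesting.
while fg.describe().get("FeatureGroupStatus") == "Creating":
    time.sleep(5)

fg.ingest(data_frame=df, max_workers=2, wait=True)
```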
Train models quickly with optimized performance
Amazon SageMaker offers high-performing distributed training libraries and built-in tools to optimize model performance. You can automatically tune your models and visualize and correct performance issues before deploying the models to production.
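As one concrete example of automatic model tuning on top of a distributed training job, the sketch below pairs a two-instance PyTorch estimator with a hyperparameter tuning job using the SageMaker Python SDK; the training script, role ARN, metric regex, and data location are hypothetical.

```python
from sagemaker.pytorch import PyTorch
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner

# Placeholder training script, role, and hyperparameters.
estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    framework_version="2.1",
    py_version="py310",
    instance_count=2,             # train across two instances
    instance_type="ml.g5.xlarge",
    hyperparameters={"epochs": 10},
)

# Search the learning rate to minimize the validation loss reported by the training script.
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:loss",
    objective_type="Minimize",
    hyperparameter_ranges={"lr": ContinuousParameter(1e-5, 1e-2)},
    metric_definitions=[{"Name": "validation:loss", "Regex": "val_loss=([0-9\\.]+)"}],
    max_jobs=8,
    max_parallel_jobs=2,
)
tuner.fit({"train": "s3://my-example-bucket/train"})
```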
Deploy models for optimal inference performance and cost
Deploy your models with a broad selection of ML infrastructure and deployment options to meet your ML inference needs. SageMaker inference is fully managed and integrates with MLOps tools, so you can scale your model deployment, reduce inference costs, manage models more effectively in production, and reduce operational burden.
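The sketch below shows two common paths with the SageMaker Python SDK: a real-time endpoint on a dedicated instance and, as an alternative, a serverless endpoint that scales with traffic. The model artifact, inference script, and role ARN are placeholders.

```python
from sagemaker.pytorch import PyTorchModel
from sagemaker.serverless import ServerlessInferenceConfig

# Placeholder model artifact, inference script, and role.
model = PyTorchModel(
    model_data="s3://my-example-bucket/model/model.tar.gz",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    entry_point="inference.py",
    framework_version="2.1",
    py_version="py310",
)

# Option 1: real-time endpoint on a dedicated instance.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

# Option 2: serverless endpoint (uncomment to use instead of Option 1).
# predictor = model.deploy(
#     serverless_inference_config=ServerlessInferenceConfig(
#         memory_size_in_mb=2048, max_concurrency=5
#     )
# )

print(predictor.endpoint_name)
```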
Deliver high-performance production ML models
SageMaker provides purpose-built MLOps and governance tools to help you automate, standardize, and document processes across the ML lifecycle. Using SageMaker MLOps tools, you can easily train, test, troubleshoot, deploy, and govern ML models at scale while maintaining model performance in production.
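As a minimal sketch of automating one stage of the lifecycle, the example below defines a SageMaker Pipelines workflow with a single training step and then registers and starts it; the training script, role ARN, and data location are hypothetical.

```python
from sagemaker.pytorch import PyTorch
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.pipeline_context import PipelineSession
from sagemaker.workflow.steps import TrainingStep

session = PipelineSession()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

# Placeholder training script and data; with PipelineSession, fit() returns step
# arguments instead of starting a job immediately.
estimator = PyTorch(
    entry_point="train.py",
    role=role,
    framework_version="2.1",
    py_version="py310",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="TrainModel",
    step_args=estimator.fit({"train": "s3://my-example-bucket/train"}),
)

pipeline = Pipeline(
    name="example-training-pipeline",
    steps=[train_step],
    sagemaker_session=session,
)
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
pipeline.start()                # run the pipeline
```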
Get generative AI-powered assistance
Accelerate your machine learning development with AI assistance powered by Amazon Q Developer in JupyterLab and Code Editor. Use its inline code suggestions and chat-based assistance to receive how-to guidance, coding support, and troubleshooting steps on demand. Get started quickly and boost your productivity with this powerful tool at your fingertips.