
avn service flink#

Here you’ll find the full list of commands for avn service flink.

Warning

The Aiven for Apache Flink® CLI commands have been updated; to run them, you need aiven-client version 2.18.0.

Manage Aiven for Apache Flink® applications#

avn service flink create-application#

Create a new Aiven for Apache Flink® application in the specified service and project.

  • project: The name of the project
  • service_name: The name of the service
  • application_properties: Application properties definition for the Aiven for Flink application, either as a JSON string or a file path (prefixed with '@') containing the JSON configuration

The application_properties parameter should contain the following common properties in JSON format:

  • name: The name of the application
  • application_version: (Optional) The version of the application

Example: Creates an Aiven for Apache Flink application named DemoApp in the service flink-democli and project my-project.

avn service flink create-application flink-democli  \
  --project my-project                              \
  "{\"name\":\"DemoApp\"}"

An example of avn service flink create-application output:

{
  "application_versions": [],
  "created_at": "2023-02-08T07:37:25.165996Z",
  "created_by": "wilma@example.com",
  "id": "2b29f4aa-a496-4fca-8575-23544415606e",
  "name": "DemoApp",
  "updated_at": "2023-02-08T07:37:25.165996Z",
  "updated_by": "wilma@example.com"
}
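The application_properties argument can also be supplied as a file. A minimal sketch (the file name app-properties.json is an assumption; the avn call is shown commented because it requires an authenticated client):

```shell
# Write the application properties to a file instead of passing inline JSON.
cat > app-properties.json <<'EOF'
{
  "name": "DemoApp"
}
EOF

# Pass the file path prefixed with '@' (requires an authenticated aiven-client):
# avn service flink create-application flink-democli \
#   --project my-project \
#   "@app-properties.json"
```

This avoids the shell escaping needed for inline JSON strings.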

avn service flink list-applications#

Lists all the Aiven for Apache Flink® applications in a specified project and service.

  • project: The name of the project
  • service_name: The name of the service

Example: Lists all the Aiven for Flink applications for the service flink-democli in the project my-project.

avn service flink list-applications flink-democli \
  --project my-project

An example of avn service flink list-applications output:

{
  "applications": [
      {
          "created_at": "2023-02-08T07:37:25.165996Z",
          "created_by": "wilma@example.com",
          "id": "2b29f4aa-a496-4fca-8575-23544415606e",
          "name": "DemoApp",
          "updated_at": "2023-02-08T07:37:25.165996Z",
          "updated_by": "wilma@example.com"
      }
  ]
}
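Because the command returns JSON, its output can be post-processed in a script. A sketch that extracts each application's id and name (the output above is reproduced by hand in applications.json; in practice you would redirect the avn command's stdout into the file):

```shell
# Stand-in for `avn service flink list-applications` output, reproduced by hand.
cat > applications.json <<'EOF'
{
  "applications": [
    {
      "id": "2b29f4aa-a496-4fca-8575-23544415606e",
      "name": "DemoApp"
    }
  ]
}
EOF

# Print one "id name" line per application.
python3 -c "
import json
apps = json.load(open('applications.json'))['applications']
for app in apps:
    print(app['id'], app['name'])
"
```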

avn service flink get-application#

Retrieves information about an Aiven for Flink® application in a specified project and service.

  • project: The name of the project
  • service_name: The name of the service
  • application-id: The ID of the Aiven for Flink application to retrieve information about

Example: Retrieves information about the Aiven for Flink® application with application-id 2b29f4aa-a496-4fca-8575-23544415606e for the service flink-democli and project my-project.

avn service flink get-application flink-democli \
  --project my-project                          \
  --application-id 2b29f4aa-a496-4fca-8575-23544415606e

An example of avn service flink get-application output:

{
    "application_versions": [],
    "created_at": "2023-02-08T07:37:25.165996Z",
    "created_by": "wilma@example.com",
    "id": "2b29f4aa-a496-4fca-8575-23544415606e",
    "name": "DemoApp",
    "updated_at": "2023-02-08T07:37:25.165996Z",
    "updated_by": "wilma@example.com"
}
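An empty application_versions array means no version has been created for the application yet (see create-application-version below). A small sketch that checks this from saved get-application output (the file name is illustrative):

```shell
# Stand-in for `avn service flink get-application` output, reproduced by hand.
cat > application.json <<'EOF'
{
  "application_versions": [],
  "id": "2b29f4aa-a496-4fca-8575-23544415606e",
  "name": "DemoApp"
}
EOF

# Print the application name and its number of versions.
python3 -c "
import json
app = json.load(open('application.json'))
print(app['name'], len(app['application_versions']))
"
```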

avn service flink update-application#

Update an Aiven for Flink® application in a specified project and service.

  • project: The name of the project
  • service_name: The name of the service
  • application-id: The ID of the Aiven for Flink application to update
  • application_properties: Application properties definition for the Aiven for Flink® application, either as a JSON string or a file path (prefixed with '@') containing the JSON configuration

The application_properties parameter should contain the following common properties in JSON format:

  • name: The name of the application

Example: Updates the name of the Aiven for Flink application from Demo to DemoApp for application-id 986b2d5f-7eda-480c-bcb3-0f903a866222 in the service flink-democli and project my-project.

avn service flink update-application flink-democli      \
  --project my-project                                  \
  --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222 \
  "{\"name\":\"DemoApp\"}"

avn service flink delete-application#

Delete an Aiven for Flink® application in a specified project and service.

  • project: The name of the project
  • service_name: The name of the service
  • application-id: The ID of the Aiven for Flink application to delete

Example: Deletes the Aiven for Flink application with application-id 64192db8-d073-4e28-956b-82c71b016e3e for the service flink-democli in the project my-project.

avn service flink delete-application flink-democli \
  --project my-project                             \
  --application-id 64192db8-d073-4e28-956b-82c71b016e3e
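Deletion takes the application ID rather than its name, so a script typically resolves the ID from list-applications output first. A hedged sketch (apps.json stands in for the command's output, and the application name OldApp is illustrative; the delete call is commented because it requires an authenticated client):

```shell
# Stand-in for `avn service flink list-applications` output.
cat > apps.json <<'EOF'
{
  "applications": [
    {"id": "64192db8-d073-4e28-956b-82c71b016e3e", "name": "OldApp"},
    {"id": "2b29f4aa-a496-4fca-8575-23544415606e", "name": "DemoApp"}
  ]
}
EOF

# Resolve the id of the application named OldApp.
APP_ID=$(python3 -c "
import json
apps = json.load(open('apps.json'))['applications']
print(next(a['id'] for a in apps if a['name'] == 'OldApp'))
")
echo "$APP_ID"

# avn service flink delete-application flink-democli \
#   --project my-project --application-id "$APP_ID"
```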

avn service flink create-application-version#

Create an Aiven for Flink® application version in a specified project and service.

Warning

Before creating an application, you need to create integrations between Aiven for Apache Flink® and the source/sink data services. Currently, you can define integrations with:

  • Aiven for Apache Kafka® as source/sink

  • Aiven for Apache PostgreSQL® as source/sink

  • Aiven for OpenSearch® as sink

Sinking data using the Slack connector doesn't require an integration.

Example: To create an integration between an Aiven for Apache Flink® service named flink-democli and an Aiven for Apache Kafka® service named demo-kafka, use the following command:

avn service integration-create    \
  --integration-type flink        \
  --dest-service flink-democli    \
  --source-service demo-kafka

All the available integration command options can be found in the dedicated document.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application to create a version for

application_version_properties

Application version properties definition for Aiven for Flink® application, either as a JSON string or a file path (prefixed with ‘@’) containing the JSON configuration

The application_version_properties parameter should contain the following common properties in JSON format:

Parameter

Information

sinks

An array of objects that contains the table creation statements of the sinks

create_table

A string that defines the CREATE TABLE statement of the sink including the integration ID. The integration ID can be found with the integration-list command

sources

An array of objects that contains the table creation statements of the sources

create_table

A string that defines the CREATE TABLE statement of the source including the integration ID. The integration ID can be found with the integration-list command

statement

The transformation SQL statement of the application

Example: Creates a new Aiven for Flink application version for application-id 986b2d5f-7eda-480c-bcb3-0f903a866222 with the following details:

  • Source: a table named pizza_orders, reading from an Apache Kafka® topic named pizza_orders_topic using the integration with ID 4ec23427-9e9f-4827-90fa-ea9e38c31bc3 and the following columns:

    id INT,
    shop VARCHAR,
    name VARCHAR,
    phoneNumber VARCHAR,
    address VARCHAR,
    pizzas ARRAY<ROW(pizzaName VARCHAR, additionalToppings ARRAY<VARCHAR>)>
    
  • Sink: a table named special_orders, writing to an Apache Kafka® topic named special_orders_topic using the integration with ID 4ec23427-9e9f-4827-90fa-ea9e38c31bc3 and the following columns:

    id INT,
    name VARCHAR,
    topping VARCHAR
    
  • SQL statement:

    INSERT INTO special_orders
    SELECT id,
      name,
      c.topping
    FROM pizza_orders
      CROSS JOIN UNNEST(pizzas) b
      CROSS JOIN UNNEST(b.additionalToppings) AS c(topping)
    WHERE c.topping IN ('🍍 pineapple', '🍓 strawberry','🍌 banana')
    
avn service flink create-application-version flink-democli        \
  --project my-project                                            \
  --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222           \
  """{
    \"sources\": [
      {
        \"create_table\":
          \"CREATE TABLE pizza_orders (                                                   \
              id INT,                                                                     \
              shop VARCHAR,                                                               \
              name VARCHAR,                                                               \
              phoneNumber VARCHAR,                                                        \
              address VARCHAR,                                                            \
              pizzas ARRAY<ROW(pizzaName VARCHAR, additionalToppings ARRAY<VARCHAR>)>)    \
            WITH (                                                                        \
              'connector' = 'kafka',                                                      \
              'properties.bootstrap.servers' = '',                                        \
              'scan.startup.mode' = 'earliest-offset',                                    \
              'topic' = 'pizza_orders_topic',                                             \
              'value.format' = 'json'                                                     \
            )\",
        \"integration_id\": \"4ec23427-9e9f-4827-90fa-ea9e38c31bc3\"
      }
    ],
    \"sinks\": [
      {
        \"create_table\":
          \"CREATE TABLE special_orders (                          \
              id INT,                                              \
              name VARCHAR,                                        \
              topping VARCHAR                                      \
              )                                                    \
            WITH (                                                 \
              'connector' = 'kafka',                               \
              'properties.bootstrap.servers' = '',                 \
              'value.fields-include' = 'ALL',                      \
              'topic' = 'special_orders_topic',                    \
              'value.format' = 'json'                              \
            )\",
        \"integration_id\": \"4ec23427-9e9f-4827-90fa-ea9e38c31bc3\"
      }
    ],
    \"statement\":
      \"INSERT INTO special_orders                                        \
        SELECT id,                                                        \
          name,                                                           \
          c.topping                                                       \
        FROM pizza_orders                                                 \
          CROSS JOIN UNNEST(pizzas) b                                     \
          CROSS JOIN UNNEST(b.additionalToppings) AS c(topping)           \
        WHERE c.topping IN ('🍍 pineapple', '🍓 strawberry','🍌 banana')\"
  }"""

avn service flink validate-application-version#

Validates the Aiven for Flink® application version in a specified project and service.

Warning

Before creating an application, you need to create integrations between Aiven for Apache Flink and the source and sink data services. Currently, you can define integrations with:

  • Aiven for Apache Kafka® as source/sink

  • Aiven for Apache PostgreSQL® as source/sink

  • Aiven for OpenSearch® as sink

Sinking data using the Slack connector doesn't need an integration.

Example: To create an integration between an Aiven for Apache Flink® service named flink-democli and an Aiven for Apache Kafka® service named demo-kafka, run:

avn service integration-create    \
  --integration-type flink        \
  --dest-service flink-democli    \
  --source-service demo-kafka

All the available integration command options can be found in the dedicated document.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application whose version to validate

application_version_properties

Application version properties definition for Aiven for Flink application, either as a JSON string or a file path (prefixed with ‘@’) containing the JSON configuration

The application_version_properties parameter should contain the following common properties in JSON format:

Parameter

Information

sinks

An array of objects that contains the table creation statements of the sinks

create_table

A string that defines the CREATE TABLE statement of the sink including the integration ID. The integration ID can be found with the integration-list command

sources

An array of objects that contains the table creation statements of the sources

create_table

A string that defines the CREATE TABLE statement of the source including the integration ID. The integration ID can be found with the integration-list command

statement

The transformation SQL statement of the application

Example: Validates the Aiven for Flink application version for the application-id 986b2d5f-7eda-480c-bcb3-0f903a866222.

avn service flink validate-application-version flink-democli      \
  --project my-project                                            \
  --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222           \
  """{
    \"sources\": [
      {
        \"create_table\":
          \"CREATE TABLE pizza_orders (                                                   \
              id INT,                                                                     \
              shop VARCHAR,                                                               \
              name VARCHAR,                                                               \
              phoneNumber VARCHAR,                                                        \
              address VARCHAR,                                                            \
              pizzas ARRAY<ROW(pizzaName VARCHAR, additionalToppings ARRAY<VARCHAR>)>)    \
            WITH (                                                                        \
              'connector' = 'kafka',                                                      \
              'properties.bootstrap.servers' = '',                                        \
              'scan.startup.mode' = 'earliest-offset',                                    \
              'topic' = 'pizza_orders_topic',                                             \
              'value.format' = 'json'                                                     \
            )\",
        \"integration_id\": \"4ec23427-9e9f-4827-90fa-ea9e38c31bc3\"
      }
    ],
    \"sinks\": [
      {
        \"create_table\":
          \"CREATE TABLE special_orders (                          \
              id INT,                                              \
              name VARCHAR,                                        \
              topping VARCHAR                                      \
              )                                                    \
            WITH (                                                 \
              'connector' = 'kafka',                               \
              'properties.bootstrap.servers' = '',                 \
              'value.fields-include' = 'ALL',                      \
              'topic' = 'special_orders_topic',                    \
              'value.format' = 'json'                              \
            )\",
        \"integration_id\": \"4ec23427-9e9f-4827-90fa-ea9e38c31bc3\"
      }
    ],
    \"statement\":
      \"INSERT INTO special_orders                                        \
        SELECT id,                                                        \
          name,                                                           \
          c.topping                                                       \
        FROM pizza_orders                                                 \
          CROSS JOIN UNNEST(pizzas) b                                     \
          CROSS JOIN UNNEST(b.additionalToppings) AS c(topping)           \
        WHERE c.topping IN ('🍍 pineapple', '🍓 strawberry','🍌 banana')\"
  }"""

avn service flink get-application-version#

Retrieves information about a specific version of an Aiven for Flink® application in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

application-version-id

The ID of the Aiven for Flink application version to retrieve information about

Example: Retrieves the information specific to the Aiven for Flink® application version for the service flink-democli and project my-project with:

  • Application ID: 986b2d5f-7eda-480c-bcb3-0f903a866222

  • Application version ID: 7a1c6266-64da-4f6f-a8b0-75207f997c8d

avn service flink get-application-version flink-democli \
  --project my-project                                  \
  --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222 \
  --application-version-id 7a1c6266-64da-4f6f-a8b0-75207f997c8d

avn service flink delete-application-version#

Deletes a version of the Aiven for Flink® application in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

application-version-id

The ID of the Aiven for Flink application version to delete

Example: Deletes the Aiven for Flink application version for the service flink-democli and project my-project with:

  • Application ID: 986b2d5f-7eda-480c-bcb3-0f903a866222

  • Application version ID: 7a1c6266-64da-4f6f-a8b0-75207f997c8d

avn service flink delete-application-version flink-democli  \
  --project my-project                                      \
  --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222     \
  --application-version-id 7a1c6266-64da-4f6f-a8b0-75207f997c8d

avn service flink list-application-deployments#

Lists all the Aiven for Flink® application deployments in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

Example: Lists all the Aiven for Flink application deployments for application-id f171af72-fdf0-442c-947c-7f6a0efa83ad for the service flink-democli, in the project my-project.

avn service flink list-application-deployments flink-democli \
  --project my-project                                       \
  --application-id f171af72-fdf0-442c-947c-7f6a0efa83ad

avn service flink get-application-deployment#

Retrieves information about an Aiven for Flink® application deployment in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

deployment-id

The ID of the Aiven for Flink application deployment. This ID can be obtained from the output of the avn service flink list-application-deployments command

Example: Retrieves the details of the Aiven for Flink application deployment for the application-id f171af72-fdf0-442c-947c-7f6a0efa83ad, deployment-id bee0b5cb-01e7-49e6-bddb-a750caed4229 for the service flink-democli, in the project my-project.

avn service flink get-application-deployment flink-democli  \
  --project my-project                                      \
  --application-id f171af72-fdf0-442c-947c-7f6a0efa83ad     \
  --deployment-id bee0b5cb-01e7-49e6-bddb-a750caed4229

avn service flink create-application-deployment#

Creates a new Aiven for Flink® application deployment in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

deployment_properties

The deployment properties definition for Aiven for Flink application, either as a JSON string or a file path (prefixed with ‘@’) containing the JSON configuration

The deployment_properties parameter should contain the following common properties in JSON format:

Parameter

Information

parallelism

The number of parallel instances for the task

restart_enabled

Specifies whether a Flink job is restarted if it fails

starting_savepoint

(Optional) The savepoint from which you want to deploy.

version_id

The ID of the application version.

Example: Creates a new Aiven for Flink application deployment for the application-id 986b2d5f-7eda-480c-bcb3-0f903a866222.

avn service flink create-application-deployment flink-democli \
  --project my-project                                        \
  --application-id 986b2d5f-7eda-480c-bcb3-0f903a866222       \
  "{\"parallelism\": 1, \"restart_enabled\": true, \"version_id\": \"7a1c6266-64da-4f6f-a8b0-75207f997c8d\"}"

avn service flink delete-application-deployment#

Deletes an Aiven for Flink® application deployment in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink® application

deployment-id

The ID of the Aiven for Flink® application deployment to delete

Example: Deletes the Aiven for Flink application deployment with application-id f171af72-fdf0-442c-947c-7f6a0efa83ad and deployment-id 6d5e2c03-2235-44a5-ab8f-c544a4de04ef.

avn service flink delete-application-deployment flink-democli   \
  --project my-project                                          \
  --application-id f171af72-fdf0-442c-947c-7f6a0efa83ad         \
  --deployment-id 6d5e2c03-2235-44a5-ab8f-c544a4de04ef

avn service flink stop-application-deployment#

Stops a running Aiven for Flink® application deployment in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

deployment-id

The ID of the Aiven for Flink application deployment to stop

Example: Stops the Aiven for Flink application deployment with application-id f171af72-fdf0-442c-947c-7f6a0efa83ad and deployment-id 6d5e2c03-2235-44a5-ab8f-c544a4de04ef.

avn service flink stop-application-deployment flink-democli \
  --project my-project                                      \
  --application-id f171af72-fdf0-442c-947c-7f6a0efa83ad     \
  --deployment-id 6d5e2c03-2235-44a5-ab8f-c544a4de04ef

avn service flink cancel-application-deployments#

Cancels an Aiven for Flink® application deployment in a specified project and service.

Parameter

Information

project

The name of the project

service_name

The name of the service

application-id

The ID of the Aiven for Flink application

deployment-id

The ID of the Aiven for Flink application deployment to cancel

Example: Cancels the Aiven for Flink application deployment with application-id f171af72-fdf0-442c-947c-7f6a0efa83ad and deployment-id 6d5e2c03-2235-44a5-ab8f-c544a4de04ef.

avn service flink cancel-application-deployments flink-democli \
  --project my-project                                         \
  --application-id f171af72-fdf0-442c-947c-7f6a0efa83ad        \
  --deployment-id 6d5e2c03-2235-44a5-ab8f-c544a4de04ef

Apache, Apache Kafka, Kafka, Apache Flink, Flink, Apache Cassandra, and Cassandra are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. M3, M3 Aggregator, M3 Coordinator, OpenSearch, PostgreSQL, MySQL, InfluxDB, Grafana, Terraform, and Kubernetes are trademarks and property of their respective owners. *Redis is a registered trademark of Redis Ltd. Any rights therein are reserved to Redis Ltd. Any use by Aiven is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and Aiven. All product and service names used in this website are for identification purposes only and do not imply endorsement.

Copyright © 2023, Aiven Team | Show Source | Last updated: January 2024