Making Decentralized Social Easy
Getting started building on Decentralized Social is as easy as deploying a Web2 API.
Build What You Want
Frequency Developer Gateway offers a suite of tools you can pick and choose from to build the best applications for your users.
- Add decentralized authentication and onboarding workflows
- Connect your users with their universal social graph
- Read, write, and interact with social media content
- More coming...
Web2 API Simplicity with Decentralized Power
- Build your applications faster
- Own your infrastructure
- OpenAPI/Swagger out of the box
- Optimized Docker images
Basic Architecture
Frequency Developer Gateway provides a simple API to interact with the Frequency social layers of identity, graph, content, and more.
These microservices are completely independent of one another, so you can use only those pieces you want or need.
Key Microservices
Account Service
The Account Service enables easy interaction with accounts on Frequency.
Accounts are defined by an msaId (a 64-bit identifier) and can contain additional information such as a handle, keys, and more.
- Account authentication and creation using SIWF
- Delegation management
- User Handle creation and retrieval
- User key retrieval and management
Graph Service
The Graph Service enables easy interaction with social graphs on Frequency. Each Graph connection on Frequency can be private or public and can be unidirectional (a follow) or bidirectional (a double opt-in friend connection).
- Fetch user graph
- Update delegated user graphs
- Watch graphs for external updates
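Applying fetched graph updates to local state can be modeled simply. The update payload shape below (`type`, `dsnpUserId`) is an illustrative assumption, not the Graph Service's actual schema; consult the service's OpenAPI docs for the real one:

```javascript
// Sketch: merging graph updates from the Graph Service into a local follow set.
// The { type, dsnpUserId } shape is hypothetical, for illustration only.
function applyGraphUpdates(following, updates) {
  const next = new Set(following);
  for (const { type, dsnpUserId } of updates) {
    if (type === "follow") next.add(dsnpUserId);
    else if (type === "unfollow") next.delete(dsnpUserId);
  }
  return next;
}
```

The same pattern works for both unidirectional follows and double opt-in friend connections; the service, not your application, enforces the connection semantics on chain.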
Content Publishing Service
The Content Publishing Service enables the creation of new content-related activity on Frequency.
- Create posts to publicly broadcast
- Create replies to posts
- Create reactions to posts
- Create updates to existing content
- Request deletion of content
- Store and attach media with IPFS
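Publishing operations like those above ultimately send a JSON request body to the service. As a rough sketch of assembling a broadcast post (the field names here are illustrative assumptions; check the Content Publishing Service OpenAPI docs for the actual request schema):

```javascript
// Sketch: building a broadcast (public post) request body.
// Field names are hypothetical, not the service's confirmed schema.
function buildBroadcast(content, mediaCids = []) {
  return {
    content: {
      content,                                  // the post text
      published: new Date().toISOString(),      // publication timestamp
      assets: mediaCids.map((cid) => ({
        references: [{ referenceId: cid }],     // IPFS CIDs for attached media
      })),
    },
  };
}
```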
Content Watcher Service
The Content Watcher Service enables client applications to process content found on Frequency by registering for webhook notifications, triggered when relevant content is found, eliminating the need to interact with the chain directly for new content.
- Parses and validates Frequency content
- Filterable webhooks
- Scanning control
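Filterable webhooks mean your receiver only has to act on announcements it cares about. A minimal filter check might look like this; the announcement fields (`schemaId`, `blockNumber`) are assumptions for illustration, so verify them against the Content Watcher OpenAPI docs:

```javascript
// Sketch: deciding whether a Content Watcher webhook notification matches
// an application-side filter before processing. Field names are hypothetical.
function matchesFilter(announcement, filter) {
  // Only accept announcements for schemas the application subscribed to
  if (filter.schemaIds && !filter.schemaIds.includes(announcement.schemaId)) return false;
  // Skip anything from before the configured starting block
  if (filter.fromBlock && announcement.blockNumber < filter.fromBlock) return false;
  return true;
}
```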
Get Started
Getting Started
In this tutorial, you will set up the Social App Template Example Application that uses Gateway Services. These will all run locally and connect to the public Frequency Testnet. This will give you a quick introduction to a working integration with Gateway Services and a starting place to explore the possibilities.
Expected Time: ~5 minutes
Step 1: Prerequisites
Before you begin, ensure you have the following installed on your machine:
- Git
- Docker
- Node.js
- A Web3 Polkadot wallet (e.g. Polkadot extension)
Step 2: Register on Testnet
To have your application interact on Frequency Testnet, you will need to register as a Provider. This enables users to delegate to you and allows your chain actions to be free via Capacity.
Create an Application Account in a Wallet
- Open a wallet extension such as the Polkadot extension
- Follow account creation steps
- Make sure to keep the seed phrase for the service configuration step
Acquire Testnet Tokens
Visit the Frequency Testnet Faucet and get tokens: Testnet Faucet
Create a Provider
Creating your provider account is easy via the Provider Dashboard.
- Use the same browser with the wallet extension
- Visit the Provider Dashboard
- Select Become a Provider
- Select the Testnet Paseo network
- Connect the Application Account created earlier
- Select Create an MSA and approve the transaction popups
- Choose a public Provider name (e.g. "Cool Test App") and continue via Create Provider
- Stake for Capacity by selecting Stake to Provider and stake 100 XRQCY Tokens
Step 3: Configure and Run the Example
Clone the Example Repository
git clone https://github.com/ProjectLibertyLabs/social-app-template.git
cd social-app-template
Run the Configuration Script
./start.sh
Testnet Setup Help
Use default values when uncertain.
- Do you want to start on Frequency Paseo Testnet? Yes!
- Enter Provider ID: This is the Provider Id from the Provider Dashboard
- Enter Provider Seed Phrase: This is the seed phrase saved from the wallet setup
- Do you want to change the IPFS settings?
  - No, if this is just a test run
  - Yes, if you want to use an IPFS pinning service
Step 4: Done & What Happened?
You should now be able to access the Social App Template at http://localhost:3000!
What happened in the background?
All the different services needed were started in Docker (Docker Desktop Screenshot):
Step 5: Shutdown
Stop all the Docker services via the script (with the option to remove saved data), or just use Docker Desktop.
./stop.sh
What's Next?
- Open the OpenAPI/Swagger Documentation for each service
- Learn about each service
- Read about Running in Production
Frequency Developer Gateway Guides
Quick Start
Walk through all the steps to get Gateway and the Social Application Template running in 5 minutes.
Single Sign-On
Learn how to use the Frequency Developer Gateway to quickly add Sign In with Frequency, a single sign-on powered by Frequency, to your application.
Become a Provider
Learn how to setup your Provider Account to represent your application on Frequency.
Become a Provider
A Provider is a special kind of user account on Frequency, capable of executing certain operations on behalf of other users (delegators). Any organization wishing to deploy an application that will act on behalf of users must first register as a Provider. This guide will walk you through the steps to becoming a Provider on the Frequency Testnet. See How to Become a Provider on Mainnet if you are ready to move to production.
Step 1: Generate Your Keys
There are various wallets that can generate and secure Frequency-compatible keys, including:
This onboarding process will guide you through the creation of an account and the creation of a Provider Control Key which will be required for many different transactions.
Step 2: Acquire Testnet Tokens
Taking the account generated in Step 1, visit the Frequency Testnet Faucet and get tokens: Testnet Faucet
Step 3: Create a Testnet Provider
Creating your provider account is easy via the Provider Dashboard.
- Visit the Provider Dashboard
- Select Become a Provider
- Select the Testnet Paseo network
- Connect the Application Account created earlier
- Select Create an MSA and approve the transaction popups
- Choose a public Provider name (e.g. "Cool Test App") and continue via Create Provider
Step 4: Gain Capacity
Capacity is the ability to perform some transactions without token cost. All interactions with the chain that an application does on behalf of a user can be done with Capacity.
In the Provider Dashboard, log in, select Stake to Provider, and stake 100 XRQCY Tokens.
Step 5: Done!
You are now registered as a Provider on Testnet and have Capacity to do things like support users with Single Sign On.
You can also use the Provider Dashboard to add additional Control Keys for safety.
Ready to Become a Provider on Mainnet?
Want to make the next step to becoming a Provider on Mainnet?
- Securely generate a Frequency Mainnet Account
- Backup your seed phrase for the account.
- Acquire a small amount of FRQCY tokens.
- Complete the registration with the generated Frequency Mainnet Account via the Provider Dashboard.
The registration process is currently gated to prevent malicious Providers.
Fast Single Sign On with SIWF v2 with Account Service
Overview
Sign In With Frequency (SIWF v2) is quick, user-friendly, decentralized authentication using the Frequency blockchain. Coupled with the Account Service, this provides fast and secure SSO across applications by utilizing cryptographic signatures to verify user identities without complex identity management systems.
Key benefits:
- Decentralized authentication
- Integration with the identity system on Frequency
- Support for multiple credentials (e.g., email, phone)
- Secure and fast user onboarding
Resources:
Setup Tutorial
In this tutorial, you will set up a Sign In With Frequency button for use with Testnet, which will enable you to acquire onboarded, authenticated users with minimal steps.
Prerequisites
Ensure you have:
- Registered your application as a Provider on Frequency Testnet.
- A running instance of the Account Service, accessible only from your backend.
- Access to a Frequency RPC Node.
- Public Testnet Node: wss://0.rpc.testnet.amplica.io
Overview
- Application creates a signed request SIWF URL that contains a callback URL.
- User clicks a button that uses the signed request URL.
- User visits the SIWF v2 compatible service (e.g. Frequency Access).
- User is processed by the service.
- User returns with the callback URL.
- Application has the Account Service validate and process registration on Frequency if needed.
- User is authenticated.
Step 1: Generate a SIWF v2 Signed Request
The User will be redirected to a service for generating their signed authentication.
Option A: Static Callback and Permissions
If a static callback and permissions are all that is required, a static Signed Request may be generated and used: Signed Request Generator Tool
Option B: Dynamic Callback or Permissions
A dynamic signed request allows for user-specific callbacks. While this is not needed for most applications, some situations require it.
The Account Service provides an API to generate the Signed Request URL:
curl -X GET "https://account-service.internal/v2/accounts/siwf?callbackUrl=https://app.example.com/callback"
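The same call can be made from a Node.js backend. This is a minimal sketch mirroring the curl example above; the `account-service.internal` host and the shape of the JSON response are assumptions, and `fetchImpl` is injectable so the call can be stubbed in tests:

```javascript
// Sketch: requesting a SIWF signed-request URL from a backend-only
// Account Service instance. Host and response shape are illustrative.
async function getSiwfRequestUrl(
  callbackUrl,
  fetchImpl = fetch,
  base = "https://account-service.internal"
) {
  const url = `${base}/v2/accounts/siwf?callbackUrl=${encodeURIComponent(callbackUrl)}`;
  const res = await fetchImpl(url);
  if (!res.ok) throw new Error(`Account Service returned ${res.status}`);
  return res.json(); // assumed to contain the redirect URL for the user
}
```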
Selecting Permissions for Delegation
Permissions define the actions that you as the Application can perform on behalf of the user. They are based on Schemas published to Frequency.
See list of SIWF v2 Available Delegations.
Requesting Credentials
SIWF v2 supports requesting validated credentials such as a phone number, email, and private graph keys.
See list of SIWF v2 Credentials.
Step 2: Forward the User for Authentication
Redirect the user to the URL obtained from the previous step:
window.location.href = 'https://testnet.frequencyaccess.com/siwa/start?signedRequest=eyJyZXF1ZXN0ZWRTaWduYXR1cmVzIjp7InB1YmxpY0tleSI6eyJlbmNvZGVkVmFsdWUiOiJmNmNMNHdxMUhVTngxMVRjdmRBQk5mOVVOWFhveUg0N21WVXdUNTl0elNGUlc4eURIIiwiZW5jb2RpbmciOiJiYXNlNTgiLCJmb3JtYXQiOiJzczU4IiwidHlwZSI6IlNyMjU1MTkifSwic2lnbmF0dXJlIjp7ImFsZ28iOiJTUjI1NTE5IiwiZW5jb2RpbmciOiJiYXNlMTYiLCJlbmNvZGVkVmFsdWUiOiIweDNlMTdhYzM3Yzk3ZWE3M2E3YzM1ZjBjYTJkZTcxYmY3MmE5NjlkYjhiNjQyYzU3ZTI2N2Q4N2Q1OTA3ZGM4MzVmYTJjODI4MTdlODA2YTQ5NGIyY2E5Y2U5MjJmNDM1NDY4M2U4YzAxMzY5NTNlMGZlNWExODJkMzU0NjQ2Yzg4In0sInBheWxvYWQiOnsiY2FsbGJhY2siOiJodHRwOi8vbG9jYWxob3N0OjMwMDAiLCJwZXJtaXNzaW9ucyI6WzUsNyw4LDksMTBdfX0sInJlcXVlc3RlZENyZWRlbnRpYWxzIjpbeyJ0eXBlIjoiVmVyaWZpZWRHcmFwaEtleUNyZWRlbnRpYWwiLCJoYXNoIjpbImJjaXFtZHZteGQ1NHp2ZTVraWZ5Y2dzZHRvYWhzNWVjZjRoYWwydHMzZWV4a2dvY3ljNW9jYTJ5Il19LHsiYW55T2YiOlt7InR5cGUiOiJWZXJpZmllZEVtYWlsQWRkcmVzc0NyZWRlbnRpYWwiLCJoYXNoIjpbImJjaXFlNHFvY3poZnRpY2k0ZHpmdmZiZWw3Zm80aDRzcjVncmNvM29vdnd5azZ5NHluZjQ0dHNpIl19LHsidHlwZSI6IlZlcmlmaWVkUGhvbmVOdW1iZXJDcmVkZW50aWFsIiwiaGFzaCI6WyJiY2lxanNwbmJ3cGMzd2p4NGZld2NlazVkYXlzZGpwYmY1eGppbXo1d251NXVqN2UzdnUydXducSJdfV19XX0&mode=dark';
For mobile applications, use an embedded browser to handle the redirection smoothly with minimal impact on user experience.
Step 3: Handle the Callback
After the user completes authentication, Frequency Access or another SIWF v2 Service will redirect the user to your callbackUrl with either an authorizationCode or an authorizationPayload.
The Account Service provides an API to validate and process the SIWF v2 authorization:
curl -X POST "https://account-service.internal/v2/accounts/siwf" \
-H "Content-Type: application/json" \
-d '{
"authorizationCode": "received-code"
}'
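The same exchange from a Node.js backend might look like this. It mirrors the curl example above; the `account-service.internal` host is an assumption, and `fetchImpl` is injectable so the call can be stubbed in tests:

```javascript
// Sketch: validating the SIWF callback by POSTing the authorizationCode to
// the Account Service. Host is illustrative; route mirrors the curl above.
async function validateSiwfCallback(
  authorizationCode,
  fetchImpl = fetch,
  base = "https://account-service.internal"
) {
  const res = await fetchImpl(`${base}/v2/accounts/siwf`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ authorizationCode }),
  });
  if (!res.ok) throw new Error(`SIWF validation failed: ${res.status}`);
  return res.json(); // { controlKey, msaId?, email?, ... }
}
```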
The response will include the user's credentials, control key, and more:
{
"controlKey": "f6cL4wq1HUNx11TcvdABNf9UNXXoyH47mVUwT59tzSFRW8yDH",
"msaId": "314159265358979323846264338",
"email": "user@example.com",
"phoneNumber": "555-867-5309",
"graphKey": "555-867-5309",
"rawCredentials": [
{
"@context": [
"https://www.w3.org/ns/credentials/v2",
"https://www.w3.org/ns/credentials/undefined-terms/v2"
],
"type": [
"VerifiedEmailAddressCredential",
"VerifiableCredential"
],
"issuer": "did:web:frequencyaccess.com",
"validFrom": "2024-08-21T21:28:08.289+0000",
"credentialSchema": {
"type": "JsonSchema",
"id": "https://schemas.frequencyaccess.com/VerifiedEmailAddressCredential/bciqe4qoczhftici4dzfvfbel7fo4h4sr5grco3oovwyk6y4ynf44tsi.json"
},
"credentialSubject": {
"id": "did:key:z6QNucQV4AF1XMQV4kngbmnBHwYa6mVswPEGrkFrUayhttT1",
"emailAddress": "john.doe@example.com",
"lastVerified": "2024-08-21T21:27:59.309+0000"
},
"proof": {
"type": "DataIntegrityProof",
"verificationMethod": "did:web:frequencyaccess.com#z6MkofWExWkUvTZeXb9TmLta5mBT6Qtj58es5Fqg1L5BCWQD",
"cryptosuite": "eddsa-rdfc-2022",
"proofPurpose": "assertionMethod",
"proofValue": "z4jArnPwuwYxLnbBirLanpkcyBpmQwmyn5f3PdTYnxhpy48qpgvHHav6warjizjvtLMg6j3FK3BqbR2nuyT2UTSWC"
}
},
{
"@context": [
"https://www.w3.org/ns/credentials/v2",
"https://www.w3.org/ns/credentials/undefined-terms/v2"
],
"type": [
"VerifiedGraphKeyCredential",
"VerifiableCredential"
],
"issuer": "did:key:z6QNucQV4AF1XMQV4kngbmnBHwYa6mVswPEGrkFrUayhttT1",
"validFrom": "2024-08-21T21:28:08.289+0000",
"credentialSchema": {
"type": "JsonSchema",
"id": "https://schemas.frequencyaccess.com/VerifiedGraphKeyCredential/bciqmdvmxd54zve5kifycgsdtoahs5ecf4hal2ts3eexkgocyc5oca2y.json"
},
"credentialSubject": {
"id": "did:key:z6QNucQV4AF1XMQV4kngbmnBHwYa6mVswPEGrkFrUayhttT1",
"encodedPublicKeyValue": "0xb5032900293f1c9e5822fd9c120b253cb4a4dfe94c214e688e01f32db9eedf17",
"encodedPrivateKeyValue": "0xd0910c853563723253c4ed105c08614fc8aaaf1b0871375520d72251496e8d87",
"encoding": "base16",
"format": "bare",
"type": "X25519",
"keyType": "dsnp.public-key-key-agreement"
},
"proof": {
"type": "DataIntegrityProof",
"verificationMethod": "did:key:z6MktZ15TNtrJCW2gDLFjtjmxEdhCadNCaDizWABYfneMqhA",
"cryptosuite": "eddsa-rdfc-2022",
"proofPurpose": "assertionMethod",
"proofValue": "z2HHWwtWggZfvGqNUk4S5AAbDGqZRFXjpMYAsXXmEksGxTk4DnnkN3upCiL1mhgwHNLkxY3s8YqNyYnmpuvUke7jF"
}
}
]
}
Step 4: Initiate a User Session
There are two identifiers included with the response. The controlKey will always be returned and can be considered unique for the user for this authentication session. The msaId is the unique identifier of an account on Frequency, but it may not be available immediately if the user is new to Frequency (see Waiting for an MSA Id below).
At this point the user is authenticated! Your application should initiate a session and follow standard session management practices.
Waiting for an MSA Id
If you want to wait for confirmation that the Account Service has (if needed) created an MSA Id for the user, you may use this pair of APIs to confirm it:
- Get the MSA Id by controlKey: GET /v1/accounts/account/{accountId}
- Get the delegation by msaId and providerId: GET /v2/delegations/{msaId}/{providerId}
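A minimal polling loop over the first API might look like the sketch below. The `getAccount` function is an assumed wrapper around GET /v1/accounts/account/{accountId}, and the retry count and delay are arbitrary:

```javascript
// Sketch: polling the Account Service until an MSA Id exists for a new
// user's control key. getAccount is an injected, hypothetical API wrapper.
async function waitForMsaId(controlKey, getAccount, { retries = 10, delayMs = 3000 } = {}) {
  for (let i = 0; i < retries; i++) {
    const account = await getAccount(controlKey);
    if (account && account.msaId) return account.msaId; // MSA Id created
    await new Promise((r) => setTimeout(r, delayMs));   // wait and retry
  }
  throw new Error("MSA Id not found in time");
}
```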
Behind the Scenes
What's happening in each of these systems?
SIWF v2 Service
Connects or provides the user's wallet to sign the needed payloads to prove they are the controller of their account.
Learn more about the SIWF v2 Specification.
Account Service
- Generates a signed SIWF v2 URL using a Provider Control Key.
- Retrieves and validates the response from the SIWF v2 Callback URL.
Frequency
Provides the source of truth and unique identifiers for each account so that accounts are secure.
Core Concepts
Global State
Frequency provides a shared global state that makes interoperability and user control fundamental to the internet. Applications provide unique experiences to their users while accessing the content and graph connections from other applications. Your application can then interact with this shared global state seamlessly, in the same way that modern networking software allows isolated computers to interact over a global network, moving past artificial application boundaries.
User Control with Delegation
Users are at the core of every application and network. While users must maintain ultimate control, delegation to your application gives you the ability to provide seamless experiences for your users and their data.
Learn more about Delegation in Frequency Documentation.
Interoperability Between Apps
Frequency enables seamless interaction and data sharing between different applications built on its platform. This interoperability is facilitated by:
- Standardized Protocols: Frequency uses the Decentralized Social Networking Protocol (DSNP), an open Web3 protocol that ensures compatibility between different applications.
- Common Data Structures: By using standardized data structures for user profiles, messages, and other social interactions, Frequency ensures that data can be easily shared and interpreted across different applications.
- User Control: Users can switch between different applications without losing their social connections or content, ensuring continuity and control over their digital presence.
By leveraging these principles and infrastructures, Frequency provides a robust platform for developing decentralized social applications that are secure, scalable, and user-centric.
Learn More
Blockchain Basics
Overview of Blockchain Principles for Social Applications
Blockchain technology is a decentralized ledger system where data is stored across multiple nodes, ensuring transparency, security, and immutability.
Reading the Blockchain
RPCs, Universal State, Finalized vs Non-Finalized
- RPCs (Remote Procedure Calls): RPCs are used to interact with the blockchain network. They allow users to query the blockchain state, submit transactions, and perform other operations by sending requests to nodes in the network.
- Universal State: The blockchain maintains a universal state that is agreed upon by all participating nodes. This state includes all the data and transactions that have been validated and confirmed.
- Finalized vs Non-Finalized:
  - Finalized Transactions: Once a transaction is included in a block and that block is finalized, the transaction is immutable and cannot be changed or reverted.
  - Non-Finalized Transactions: Transactions that have been submitted to the network but are not yet included in a finalized block are non-finalized. They are pending confirmation and can still be altered or rejected.
Writing Changes to the Blockchain
Transactions
- Transactions are the primary means of updating the blockchain state. They can involve transferring tokens, or executing other predefined operations.
Nonces
- Each transaction includes a nonce, a unique number that prevents replay attacks. The nonce ensures that each transaction is processed only once and in the correct order.
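The nonce rule can be modeled in a few lines. This is a simplified illustration of the concept, not actual chain logic:

```javascript
// Sketch: how a per-account nonce enforces once-only, in-order processing.
// state maps sender -> next expected nonce; a reused nonce is rejected.
function acceptTransaction(state, tx) {
  const expected = state.get(tx.sender) ?? 0;
  if (tx.nonce !== expected) return false; // replay or out-of-order: rejected
  state.set(tx.sender, expected + 1);      // advance the expected nonce
  return true;
}
```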
Finalization
- Finalization is the process of confirming and adding a transaction to a block. Once a transaction is included in a block and the block is finalized, the transaction becomes immutable.
Block Time
- Block time refers to the interval at which new blocks are added to the blockchain. It determines the speed at which transactions are confirmed and finalized. Shorter block times lead to faster transaction confirmations but can increase the risk of network instability.
Why Blockchain
- Decentralization: Eliminates the need for a central authority, ensuring that users have control over their data and interactions.
- Transparency: All transactions are recorded on a public ledger, providing visibility into the operations of social platforms.
- Security: Advanced cryptographic methods secure user data and interactions, making it difficult for malicious actors to tamper with information.
- Immutability: Once data is recorded on the blockchain, it cannot be altered, ensuring the integrity of user posts, messages, and other social interactions.
- User Empowerment: Users can own their data and have the ability to move freely between different platforms without losing their social connections or content.
Interoperability Between Frequency Social Apps
Frequency enables seamless interaction and data sharing between different social dapps built on its platform. This interoperability is facilitated by:
- Standardized Protocols: Frequency uses the Decentralized Social Networking Protocol (DSNP), an open Web3 protocol that ensures compatibility between different social dapps.
- Common Data Structures: By using standardized data structures for user profiles, messages, and other social interactions, Frequency ensures that data can be easily shared and interpreted across different applications.
- Interoperable APIs: Frequency provides a set of REST APIs that allow developers to build applications capable of interacting with each other, ensuring a cohesive user experience across the ecosystem.
- User Control: Users can switch between different social dapps without losing their social connections or content, ensuring continuity and control over their digital presence.
By leveraging these principles and infrastructures, Frequency provides a robust platform for developing decentralized social applications that are secure, scalable, and user-centric.
Frequency Networks
Mainnet
The Frequency Mainnet is the primary, production-level network where real transactions and interactions occur. It is fully secure and operational, designed to support live applications and services. Users and developers interact with the Mainnet for all production activities, ensuring that all data and transactions are immutable and transparent.
Key Features:
- High Security: Enhanced security protocols to protect user data and transactions.
- Immutability: Once data is written to the Mainnet, it cannot be altered.
- Decentralization: Fully decentralized network ensuring no single point of control.
- Real Transactions: All transactions on the Mainnet are real and involve actual tokens.
URLs
- Public Mainnet RPC URLs
0.rpc.frequency.xyz
1.rpc.frequency.xyz
- Polkadot.js Block Explorer
Testnet
The Frequency Testnet is a testing environment that mirrors the Mainnet. It allows developers to test their applications and services in a safe environment without risking real tokens. The Testnet is crucial for identifying and fixing issues before deploying to the Mainnet.
Key Features:
- Safe Testing: Enables developers to test applications without real-world consequences.
- Simulated Environment: Mirrors the Mainnet to provide realistic testing conditions.
- No Real Tokens: Uses test tokens instead of real tokens, eliminating financial risk.
- Frequent Updates: Regularly updated to incorporate the latest features and fixes for testing purposes.
URLs
- Testnet RPC URL
0.rpc.testnet.amplica.io
- Polkadot.js Block Explorer
- Testnet Token Faucet
Local
The Local network setup is a private, local instance of the Frequency blockchain that developers can run on their own machines. It is used for development, debugging, and testing in a controlled environment. The Local network setup provides the flexibility to experiment with new features and configurations without affecting the Testnet or Mainnet.
Key Features:
- Local Development: Allows developers to work offline and test changes quickly.
- Customizable: Developers can configure the Local network to suit their specific needs.
- Isolation: Isolated from the Mainnet and Testnet, ensuring that testing does not interfere with live networks.
- Rapid Iteration: Facilitates rapid development and iteration, allowing for quick testing and debugging.
URLs
- Local Node: Typically run on http://localhost:9933 or a similar local endpoint depending on the setup.
- Documentation: Frequency Docs
- GitHub Repository: Frequency GitHub
- Project Website: Frequency Website
Using Polkadot.js Explorer
To interact with the Frequency networks using the Polkadot.js Explorer, follow these steps:
1. Open Polkadot.js Explorer: Go to Polkadot.js Explorer.
2. Select Frequency Network:
   - Click on the network selection dropdown at the top left corner of the page.
   - Choose "POLKADOT & PARACHAINS -> Frequency Polkadot Parachain" for the main network.
   - Choose "TEST PASEO & PARACHAINS -> Frequency Paseo Parachain" for the test network.
   - For local development, connect to your local node by selecting "DEVELOPMENT -> Local Node or Custom Endpoint" and entering the URL of your local node (e.g., http://localhost:9933).
3. Connect Your Wallet:
   - Ensure your Polkadot-supported wallet is connected.
   - You will be able to see and interact with your accounts and transactions on the selected Frequency network.
By following these steps, you can easily switch between the different Frequency networks and manage your blockchain activities efficiently.
Frequency Developer Gateway Architecture
Authentication
Gateway and Frequency provide authentication, but not session management. Using cryptographic signatures, you will get proof the user is authenticated without passwords or other complex identity systems to implement. Your application still must manage sessions as is best for your custom needs.
What does it mean for Applications?
- Web2 APIs: Typically use OAuth, API keys, or session tokens for authentication.
- Frequency APIs: Utilize cryptographic signatures for secure authentication, ensuring user identity and data integrity.
Sign In With Frequency (SIWF)
Sign In With Frequency (SIWF) v1 (deprecated) and v2 (which supports Frequency Access) are methods for authenticating users in the Frequency ecosystem. SIWF allows users to authenticate using their Frequency accounts, providing a secure and decentralized way to manage identities.
- SIWF v2 Implementation: Users sign in using an SIWF v2 Authentication Service that uses redirect and callback URLs. The Authentication Service authenticates and generates cryptographic signatures for authentication.
- SIWF v1 Implementation: Users sign in using their Web3 wallets, which generate cryptographic signatures for authentication.
Account Service
The Account Service in Gateway handles user account management, including creating accounts, managing keys, and delegating permissions. This service replaces traditional user models with decentralized identities and provides a robust framework for user authentication and authorization.
Data Storage
What does it mean for Applications?
- Web2 APIs: Data is stored in centralized databases managed by the service provider.
- Frequency APIs: Data is stored on the decentralized blockchain (metadata) and off-chain storage (payload), ensuring transparency and user control.
IPFS
InterPlanetary File System (IPFS) is a decentralized storage solution used in the Frequency ecosystem to store large data payloads off-chain. IPFS provides a scalable and resilient way to manage data, ensuring that it is accessible and verifiable across the network.
- Usage in Gateway: Content Publishing Service uses IPFS to store user-generated content such as images, videos, and documents. The metadata associated with this content is stored on the blockchain, while the actual files are stored on IPFS, ensuring decentralization and availability.
Blockchain
The Frequency blockchain stores metadata and transaction records, providing a secure and user-controlled data store. This ensures that all interactions are transparent and traceable, enhancing trust in the system.
- Usage in Gateway: Metadata for user actions, such as content publication, follows/unfollows, and other social interactions, are stored on the blockchain. This ensures that all actions are verifiable and under user control.
Local/Application Data
For efficiency and performance, certain data may be stored locally or within application-specific storage systems. This allows for quick access and manipulation of frequently used data while ensuring that critical information remains secure on the blockchain.
Application / Middleware
Hooking Up All the Microservices
Gateway is designed to support a modular and microservices-based approach. Each service (e.g., Account Service, Graph Service, Content Publishing Service) operates independently but can interact through well-defined APIs.
Here is Where Your Custom Code Goes!
Developers can integrate their custom code within this modular framework, extending the functionality of the existing services or creating new services that interact with the Frequency ecosystem.
Standard Services Gateway Uses
Redis
Redis is a key-value store used for caching and fast data retrieval. It is often employed in microservices architectures to manage state and session data efficiently.
- Why Redis: Redis provides low-latency access to frequently used data, making it ideal for applications that require real-time performance.
- Usage in Gateway: Redis can be used to cache frequently accessed data, manage session states, and optimize database queries.
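The caching role Redis plays can be sketched with the cache-aside pattern. The client below is any object exposing `get`/`set` (for instance an ioredis instance); the test uses an in-memory stub, and the key name and TTL are arbitrary:

```javascript
// Sketch: cache-aside lookup. On a miss, load the value, store it with a
// TTL, and return it; on a hit, return the cached copy without reloading.
async function cachedFetch(client, key, ttlSeconds, loadFn) {
  const hit = await client.get(key);
  if (hit !== null && hit !== undefined) return JSON.parse(hit); // cache hit
  const value = await loadFn();                                  // cache miss: load
  await client.set(key, JSON.stringify(value), "EX", ttlSeconds); // store with expiry
  return value;
}
```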
BullMQ
BullMQ is a Node.js library for creating robust job queues with Redis.
- Structure for Redis Queues: BullMQ enhances Redis by providing a reliable and scalable way to manage background jobs and task queues, ensuring that tasks are processed efficiently and reliably.
- Usage in Gateway: BullMQ can be used to handle background processing tasks such as sending notifications, processing user actions, and managing content updates.
IPFS Kubo API
Kubo is an IPFS implementation and standard API designed for high performance and scalability.
- Usage in Gateway: Kubo IPFS is used to manage the storage and retrieval of large files in the Frequency ecosystem, ensuring that data is decentralized and accessible.
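The Content Publishing Service configuration (see its table below) documents an IPFS_GATEWAY_URL template in which the literal token "[CID]" is replaced by an actual content ID. A resolver for that template is a one-liner; the gateway hostname used in the example is a placeholder.

```typescript
// Substitute a content ID into an IPFS gateway URL template such as
// "https://ipfs.example.com/ipfs/[CID]" (hostname is a placeholder).
function resolveIpfsUrl(gatewayTemplate: string, cid: string): string {
  return gatewayTemplate.replace("[CID]", cid);
}
```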
Migrating from Web2 to Web3
Step-by-Step Migration Guide
1. Assess Your Current Web2 Application
   - Identify core functionalities.
   - Analyze data structures.
   - Review user authentication.
2. Understand Frequency and Gateway Services
   - Learn about the Frequency blockchain architecture.
   - Understand the Gateway services (Account, Graph, Content Publishing, Content Watcher).
3. Set Up Your Development Environment
   - Install Docker, Node.js, and a Web3 wallet.
   - Clone the Gateway service repositories.
   - Set up Docker containers.
4. Configure Gateway Services
   - Create and configure .env files with the necessary environment variables.
5. Migrate User Authentication
   - Integrate Web3 authentication using MetaMask or another Web3 wallet.
   - Configure MetaMask to connect to the Frequency TestNet.
6. Migrate Data Storage
   - Transition to decentralized storage.
   - Use the Frequency blockchain for metadata and off-chain storage for payload data.
7. Migrate Core Functionalities
   - Use the Content Publishing Service for creating feeds and posting content.
   - Use the Graph Service for managing social connections.
   - Use the Content Watcher Service for retrieving the latest state of feeds and reactions.
8. Test and Validate
   - Perform functional, performance, and security testing.
9. Optimize and Deploy
   - Optimize your application for performance on the Frequency blockchain.
   - Deploy your migrated application to the production environment.
10. Educate Your Users
    - Provide documentation and support for user onboarding.
    - Establish a feedback loop to gather user feedback and make improvements.
By following these steps, you can successfully migrate your Web2 application to the Gateway and Frequency Web3 environment.
Services
Account Service
The Account Service enables easy interaction with accounts on Frequency.
Accounts are defined by an msaId (a 64-bit identifier) and can contain additional information such as a handle, keys, and more.
- Account authentication and creation using SIWF
- Delegation management
- User Handle creation and retrieval
- User key retrieval and management
See Account Service Details & API Reference
Graph Service
The Graph Service enables easy interaction with social graphs on Frequency. Each Graph connection on Frequency can be private or public and can be unidirectional (a follow) or bidirectional (a double opt-in friend connection).
- Fetch user graph
- Update delegated user graphs
- Watch graphs for external updates
See Graph Service Details & API Reference
Content Publishing Service
The Content Publishing Service enables the creation of new content-related activity on Frequency.
- Create posts to publicly broadcast
- Create replies to posts
- Create reactions to posts
- Create updates to existing content
- Request deletion of content
- Store and attach media with IPFS
See Content Publishing Service Details & API Reference
Content Watcher Service
The Content Watcher Service enables client applications to process content found on Frequency by registering for webhook notifications that are triggered when relevant content is found, eliminating the need to poll the chain for new content.
- Parses and validates Frequency content
- Filterable webhooks
- Scanning control
See Content Watcher Service Details & API Reference
Account Service
The Account Service provides functionalities related to user accounts on the Frequency network. It includes endpoints for managing user authentication, account details, delegation, keys, and handles.
API Reference
Configuration
ℹ️ Feel free to adjust your environment variables to taste. This application recognizes the following environment variables:
Name | Description | Range/Type | Required? | Default |
---|---|---|---|---|
API_PORT | HTTP port that the application listens on | 1025 - 65535 | | 3000 |
BLOCKCHAIN_SCAN_INTERVAL_SECONDS | Number of seconds to delay between successive scans of the chain for new content (after the end of the chain is reached) | > 0 | | 12 |
CACHE_KEY_PREFIX | Prefix to use for Redis cache keys | string | Y | |
CAPACITY_LIMIT | Maximum amount of provider capacity this app is allowed to use (per epoch); type: 'percentage' or 'amount'; value: number (either a percentage, e.g. '80', or an absolute amount of capacity) | JSON | Y | |
SIWF_NODE_RPC_URL | Blockchain node address resolvable from the client browser, used for SIWF | http(s) URL | Y | |
FREQUENCY_API_WS_URL | Blockchain API WebSocket URL | ws(s) URL | Y | |
FREQUENCY_TIMEOUT_SECS | Frequency chain connection timeout limit; the app will terminate if disconnected longer | integer | | 10 |
HEALTH_CHECK_MAX_RETRIES | Number of /health endpoint failures allowed before marking the provider webhook service down | >= 0 | | 20 |
HEALTH_CHECK_MAX_RETRY_INTERVAL_SECONDS | Number of seconds between retries of the provider webhook /health endpoint when failing | > 0 | | 64 |
HEALTH_CHECK_SUCCESS_THRESHOLD | Minimum number of consecutive successful calls to the provider webhook /health endpoint before it is marked up again | > 0 | | 10 |
PROVIDER_ACCESS_TOKEN | Optional bearer token for authenticating to the provider webhook | string | | |
PROVIDER_ACCOUNT_SEED_PHRASE | Seed phrase for the provider MSA control key | string | Y | |
PROVIDER_ID | Provider MSA Id | integer | Y | |
REDIS_URL | Connection URL for Redis | URL | Y | |
TRUST_UNFINALIZED_BLOCKS | Whether to examine blocks that have not been finalized when tracking extrinsic completion | boolean | | false |
WEBHOOK_BASE_URL | Base URL for provider webhook endpoints | URL | Y | |
WEBHOOK_FAILURE_THRESHOLD | Number of failures allowed in the provider webhook before the service is marked down | > 0 | | 3 |
WEBHOOK_RETRY_INTERVAL_SECONDS | Number of seconds between provider webhook retry attempts when failing | > 0 | | 10 |
GRAPH_ENVIRONMENT_TYPE | Graph environment type | Mainnet or TestnetPaseo | Y | |
API_TIMEOUT_MS | API timeout limit in milliseconds | > 0 | | 5000 |
API_BODY_JSON_LIMIT | API JSON body size limit as a string (examples: 100kb, 5mb) | string | | 1mb |
SIWF_URL | SIWF v1: URL for the Sign In With Frequency v1 UI | URL | | https://ProjectLibertyLabs.github.io/siwf/v1/ui |
SIWF_V2_URL | SIWF v2: Sign In With Frequency v2 redirect URL | URL | | Frequency Access |
SIWF_V2_URI_VALIDATION | SIWF v2: Domain (formatted as URI) used to validate sign-in requests; required if using Sign In With Frequency v2 | Domain (examples: https://www.your-app.com, example://login, localhost) | * | |
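Putting the required variables together, a minimal .env for the Account Service might look like the following. All values are placeholders: substitute your own node URLs, provider Id, and seed phrase.

```
# Example .env for the Account Service (placeholder values only)
API_PORT=3000
FREQUENCY_API_WS_URL=wss://your-frequency-node.example:9944
SIWF_NODE_RPC_URL=https://your-frequency-node.example
REDIS_URL=redis://localhost:6379
CACHE_KEY_PREFIX=account-service:
PROVIDER_ID=1
PROVIDER_ACCOUNT_SEED_PHRASE="<your 12-word provider seed phrase>"
WEBHOOK_BASE_URL=http://localhost:5555/webhooks
GRAPH_ENVIRONMENT_TYPE=TestnetPaseo
CAPACITY_LIMIT={"type":"percentage","value":80}
```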
Best Practices
- Secure Authentication: Always use secure methods (e.g., JWT tokens) for authentication to protect user data.
- Validate Inputs: Ensure all input data is validated to prevent injection attacks and other vulnerabilities.
- Rate Limiting: Implement rate limiting to protect the service from abuse and ensure fair usage.
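The rate-limiting recommendation can be sketched with a token bucket. The capacity and refill rate below are illustrative; a production deployment would typically use a Redis-backed or API-gateway-level limiter shared across instances.

```typescript
// Token-bucket rate limiter sketch. Capacity and refill rate are illustrative.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSecond: number,
    private now: () => number = Date.now, // injectable clock for testing
  ) {
    this.tokens = capacity;
    this.lastRefill = now();
  }

  // Returns true if the request is allowed, false if it should be rejected (HTTP 429).
  tryRemove(): boolean {
    const elapsedSeconds = (this.now() - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = this.now();
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A per-client bucket (keyed by account or IP) smooths bursts while capping sustained request rates.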
Account Service
API Reference
Open Direct API Reference Page
Path Table
Method | Path | Description |
---|---|---|
GET | /v2/accounts/siwf | Get the Sign In With Frequency Redirect URL |
POST | /v2/accounts/siwf | Process the result of a Sign In With Frequency v2 callback |
GET | /v1/accounts/siwf | Get the Sign In With Frequency configuration |
POST | /v1/accounts/siwf | Request to Sign In With Frequency |
GET | /v1/accounts/{msaId} | Fetch an account given an MSA Id |
GET | /v1/accounts/account/{accountId} | Fetch an account given an Account Id |
GET | /v1/accounts/retireMsa/{accountId} | Get a retireMsa unsigned, encoded extrinsic payload. |
POST | /v1/accounts/retireMsa | Request to retire an MSA ID. |
GET | /v2/delegations/{msaId} | Get all delegation information associated with an MSA Id |
GET | /v2/delegations/{msaId}/{providerId} | Get an MSA's delegation information for a specific provider |
GET | /v1/delegation/{msaId} | Get the delegation information associated with an MSA Id |
GET | /v1/delegation/revokeDelegation/{accountId}/{providerId} | Get a properly encoded RevokeDelegationPayload that can be signed |
POST | /v1/delegation/revokeDelegation | Request to revoke a delegation |
POST | /v1/handles | Request to create a new handle for an account |
POST | /v1/handles/change | Request to change a handle |
GET | /v1/handles/change/{newHandle} | Get a properly encoded ClaimHandlePayload that can be signed. |
GET | /v1/handles/{msaId} | Fetch a handle given an MSA Id |
POST | /v1/keys/add | Add new control keys for an MSA Id |
GET | /v1/keys/{msaId} | Fetch public keys given an MSA Id |
GET | /v1/keys/publicKeyAgreements/getAddKeyPayload | Get a properly encoded StatefulStorageItemizedSignaturePayloadV2 that can be signed. |
POST | /v1/keys/publicKeyAgreements | Request to add a new public Key |
GET | /healthz | Check the health status of the service |
GET | /livez | Check the live status of the service |
GET | /readyz | Check the ready status of the service |
Path Details
[GET]/v2/accounts/siwf
- Summary
Get the Sign In With Frequency Redirect URL
Parameters(Query)
credentials?: string[]
permissions?: string[]
callbackUrl: string
Responses
- 200 SIWF Redirect URL
application/json
{
// The base64url encoded JSON stringified signed request
signedRequest: string
// A publicly available Frequency node for SIWF dApps to connect to the correct chain
frequencyRpcUrl: string
// The compiled redirect url with all the parameters already built in
redirectUrl: string
}
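The signedRequest field above is documented as a base64url-encoded, JSON-stringified signed request. Decoding it is a two-step transform; the shape of the decoded object is not specified here, so the sketch returns `unknown` (Node.js `Buffer` assumed).

```typescript
// Decode a base64url-encoded, JSON-stringified payload such as signedRequest.
// The inner object's shape is not specified in this document.
function decodeSignedRequest(signedRequest: string): unknown {
  const json = Buffer.from(signedRequest, "base64url").toString("utf8");
  return JSON.parse(json);
}
```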
[POST]/v2/accounts/siwf
- Summary
Process the result of a Sign In With Frequency v2 callback
RequestBody
- application/json
{
// The code returned from the SIWF v2 Authentication service that can be exchanged for the payload. Required unless an `authorizationPayload` is provided.
authorizationCode?: string
// The SIWF v2 Authentication payload as a JSON stringified and base64url encoded value. Required unless an `authorizationCode` is provided.
authorizationPayload?: string
}
Responses
- 200 Signed in successfully
application/json
{
// The ss58 encoded MSA Control Key of the login.
controlKey: string
// The user's MSA Id, if one is already created. Will be empty if it is still being processed.
msaId?: string
// The user's validated email
email?: string
// The user's validated SMS/phone number
phoneNumber?: string
// The user's Private Graph encryption key.
graphKey?: #/components/schemas/GraphKeySubject
rawCredentials: {
}[]
}
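Note the request schema above: authorizationCode is required unless an authorizationPayload is provided, and vice versa, so a valid request carries at least one of the two. A client-side pre-check is a one-liner; the interface name mirrors the schema but the check itself is illustrative.

```typescript
// Mirrors the WalletV2LoginRequestDto schema: at least one of the two
// authorization fields must be present for the request to be processable.
interface WalletV2LoginRequest {
  authorizationCode?: string;
  authorizationPayload?: string;
}

function hasAuthMaterial(req: WalletV2LoginRequest): boolean {
  return req.authorizationCode !== undefined || req.authorizationPayload !== undefined;
}
```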
[GET]/v1/accounts/siwf
- Summary
Get the Sign In With Frequency configuration
Responses
- 200 Returned SIWF Configuration data
application/json
{
providerId: string
siwfUrl: string
frequencyRpcUrl: string
}
[POST]/v1/accounts/siwf
- Summary
Request to Sign In With Frequency
RequestBody
- application/json
{
// The wallet login request information
signIn?: #/components/schemas/SignInResponseDto
signUp: {
extrinsics: {
pallet: string
extrinsicName: string
// Hex-encoded representation of the extrinsic
encodedExtrinsic: string
}[]
error: {
// Error message
message: string
}
}
}
Responses
- 201 Signed in successfully
application/json
{
referenceId: string
msaId?: string
publicKey?: string
}
[GET]/v1/accounts/{msaId}
- Summary
Fetch an account given an MSA Id
Responses
- 200 Found account
application/json
{
msaId: string
handle: {
base_handle: string
canonical_base: string
suffix: number
}
}
[GET]/v1/accounts/account/{accountId}
- Summary
Fetch an account given an Account Id
Responses
- 200 Found account
application/json
{
msaId: string
handle: {
base_handle: string
canonical_base: string
suffix: number
}
}
[GET]/v1/accounts/retireMsa/{accountId}
- Summary
Get a retireMsa unsigned, encoded extrinsic payload.
Responses
- 200 Created extrinsic
application/json
{
// Hex-encoded representation of the "RetireMsa" extrinsic
encodedExtrinsic: string
// payload to be signed
payloadToSign: string
// AccountId in hex or SS58 format
accountId: string
}
[POST]/v1/accounts/retireMsa
- Summary
Request to retire an MSA ID.
RequestBody
- application/json
{
// Hex-encoded representation of the "RetireMsa" extrinsic
encodedExtrinsic: string
// payload to be signed
payloadToSign: string
// AccountId in hex or SS58 format
accountId: string
// signature of the owner
signature: string
}
Responses
- 201 Created and queued request to retire an MSA ID
application/json
{
referenceId: string
}
[GET]/v2/delegations/{msaId}
- Summary
Get all delegation information associated with an MSA Id
Responses
- 200 Found delegation information
application/json
{
msaId: string
delegations: {
providerId: string
schemaDelegations: {
schemaId: number
revokedAtBlock?: number
}[]
revokedAtBlock?: number
}[]
}
[GET]/v2/delegations/{msaId}/{providerId}
- Summary
Get an MSA's delegation information for a specific provider
Responses
- 200 Found delegation information
application/json
{
msaId: string
delegations: {
providerId: string
schemaDelegations: {
schemaId: number
revokedAtBlock?: number
}[]
revokedAtBlock?: number
}[]
}
[GET]/v1/delegation/{msaId}
- Summary
Get the delegation information associated with an MSA Id
Responses
- 200 Found delegation information
application/json
{
providerId: string
schemaPermissions: {
}
revokedAt: {
}
}
[GET]/v1/delegation/revokeDelegation/{accountId}/{providerId}
- Summary
Get a properly encoded RevokeDelegationPayload that can be signed
Responses
- 200 Returned an encoded RevokeDelegationPayload for signing
application/json
{
// AccountId in hex or SS58 format
accountId: string
// MSA Id of the provider to whom the requesting user wishes to delegate
providerId: string
// Hex-encoded representation of the "revokeDelegation" extrinsic
encodedExtrinsic: string
// payload to be signed
payloadToSign: string
}
[POST]/v1/delegation/revokeDelegation
- Summary
Request to revoke a delegation
RequestBody
- application/json
{
// AccountId in hex or SS58 format
accountId: string
// MSA Id of the provider to whom the requesting user wishes to delegate
providerId: string
// Hex-encoded representation of the "revokeDelegation" extrinsic
encodedExtrinsic: string
// payload to be signed
payloadToSign: string
// signature of the owner
signature: string
}
Responses
- 201 Created and queued request to revoke a delegation
application/json
{
referenceId: string
}
[POST]/v1/handles
- Summary
Request to create a new handle for an account
RequestBody
- application/json
{
// AccountId in hex or SS58 format
accountId: string
payload: {
// base handle in the request
baseHandle: string
// expiration block number for this payload
expiration: number
}
// proof is the signature for the payload
proof: string
}
Responses
- 200 Handle creation request enqueued
application/json
{
referenceId: string
}
[POST]/v1/handles/change
- Summary
Request to change a handle
RequestBody
- application/json
{
// AccountId in hex or SS58 format
accountId: string
payload: {
// base handle in the request
baseHandle: string
// expiration block number for this payload
expiration: number
}
// proof is the signature for the payload
proof: string
}
Responses
- 200 Handle change request enqueued
application/json
{
referenceId: string
}
[GET]/v1/handles/change/{newHandle}
- Summary
Get a properly encoded ClaimHandlePayload that can be signed.
Responses
- 200 Returned an encoded ClaimHandlePayload for signing
application/json
{
payload: {
// base handle in the request
baseHandle: string
// expiration block number for this payload
expiration: number
}
// Raw encodedPayload is scale encoded of payload in hex format
encodedPayload: string
}
[GET]/v1/handles/{msaId}
- Summary
Fetch a handle given an MSA Id
Responses
- 200 Found a handle
application/json
{
base_handle: string
canonical_base: string
suffix: number
}
[POST]/v1/keys/add
- Summary
Add new control keys for an MSA Id
RequestBody
- application/json
{
// msaOwnerAddress representing the target of this request
msaOwnerAddress: string
// msaOwnerSignature is the signature by msa owner
msaOwnerSignature: string
// newKeyOwnerSignature is the signature with new key
newKeyOwnerSignature: string
payload: {
// MSA Id of the user requesting the new key
msaId: string
// expiration block number for this payload
expiration: number
// newPublicKey in hex format
newPublicKey: string
}
}
Responses
- 200 Found public keys
application/json
{
referenceId: string
}
[GET]/v1/keys/{msaId}
- Summary
Fetch public keys given an MSA Id
Responses
- 200 Found public keys
application/json
{
msaKeys: {
}
}
[GET]/v1/keys/publicKeyAgreements/getAddKeyPayload
- Summary
Get a properly encoded StatefulStorageItemizedSignaturePayloadV2 that can be signed.
Parameters(Query)
msaId: string
newKey: string
Responses
- 200 Returned an encoded StatefulStorageItemizedSignaturePayloadV2 for signing
application/json
{
payload: {
actions: {
// Action Item type
type: enum[ADD_ITEM, DELETE_ITEM]
// encodedPayload to be added
encodedPayload?: string
// index of the item to be deleted
index?: number
}[]
// schemaId related to the payload
schemaId: number
// targetHash related to the stateful storage
targetHash: number
// expiration block number for this payload
expiration: number
}
// Raw encodedPayload to be signed
encodedPayload: string
}
[POST]/v1/keys/publicKeyAgreements
- Summary
Request to add a new public Key
RequestBody
- application/json
{
// AccountId in hex or SS58 format
accountId: string
payload: {
actions: {
// Action Item type
type: enum[ADD_ITEM, DELETE_ITEM]
// encodedPayload to be added
encodedPayload?: string
// index of the item to be deleted
index?: number
}[]
// schemaId related to the payload
schemaId: number
// targetHash related to the stateful storage
targetHash: number
// expiration block number for this payload
expiration: number
}
// proof is the signature for the payload
proof: string
}
Responses
- 200 Add new key request enqueued
application/json
{
referenceId: string
}
[GET]/healthz
- Summary
Check the health status of the service
Responses
- 200 Service is healthy
[GET]/livez
- Summary
Check the live status of the service
Responses
- 200 Service is live
[GET]/readyz
- Summary
Check the ready status of the service
Responses
- 200 Service is ready
References
#/components/schemas/WalletV2RedirectResponseDto
{
// The base64url encoded JSON stringified signed request
signedRequest: string
// A publicly available Frequency node for SIWF dApps to connect to the correct chain
frequencyRpcUrl: string
// The compiled redirect url with all the parameters already built in
redirectUrl: string
}
#/components/schemas/WalletV2LoginRequestDto
{
// The code returned from the SIWF v2 Authentication service that can be exchanged for the payload. Required unless an `authorizationPayload` is provided.
authorizationCode?: string
// The SIWF v2 Authentication payload as a JSON stringified and base64url encoded value. Required unless an `authorizationCode` is provided.
authorizationPayload?: string
}
#/components/schemas/GraphKeySubject
{
// The id type of the VerifiedGraphKeyCredential.
id: string
// The encoded public key.
encodedPublicKeyValue: string
// The encoded private key. WARNING: This is sensitive user information!
encodedPrivateKeyValue: string
// How the encoded keys are encoded. Only "base16" (aka hex) currently.
encoding: string
// Any additional formatting options. Only: "bare" currently.
format: string
// The encryption key algorithm.
type: string
// The DSNP key type.
keyType: string
}
#/components/schemas/WalletV2LoginResponseDto
{
// The ss58 encoded MSA Control Key of the login.
controlKey: string
// The user's MSA Id, if one is already created. Will be empty if it is still being processed.
msaId?: string
// The user's validated email
email?: string
// The user's validated SMS/phone number
phoneNumber?: string
// The user's Private Graph encryption key.
graphKey?: #/components/schemas/GraphKeySubject
rawCredentials: {
}[]
}
#/components/schemas/WalletLoginConfigResponseDto
{
providerId: string
siwfUrl: string
frequencyRpcUrl: string
}
#/components/schemas/HandleResponseDto
{
base_handle: string
canonical_base: string
suffix: number
}
#/components/schemas/AccountResponseDto
{
msaId: string
handle: {
base_handle: string
canonical_base: string
suffix: number
}
}
#/components/schemas/SiwsPayloadDto
{
message: string
// Signature of the payload
signature: string
}
#/components/schemas/ErrorResponseDto
{
// Error message
message: string
}
#/components/schemas/SignInResponseDto
{
siwsPayload: {
message: string
// Signature of the payload
signature: string
}
error: {
// Error message
message: string
}
}
#/components/schemas/EncodedExtrinsicDto
{
pallet: string
extrinsicName: string
// Hex-encoded representation of the extrinsic
encodedExtrinsic: string
}
#/components/schemas/SignUpResponseDto
{
extrinsics: {
pallet: string
extrinsicName: string
// Hex-encoded representation of the extrinsic
encodedExtrinsic: string
}[]
error: {
// Error message
message: string
}
}
#/components/schemas/WalletLoginRequestDto
{
// The wallet login request information
signIn?: #/components/schemas/SignInResponseDto
signUp: {
extrinsics: {
pallet: string
extrinsicName: string
// Hex-encoded representation of the extrinsic
encodedExtrinsic: string
}[]
error: {
// Error message
message: string
}
}
}
#/components/schemas/WalletLoginResponseDto
{
referenceId: string
msaId?: string
publicKey?: string
}
#/components/schemas/RetireMsaPayloadResponseDto
{
// Hex-encoded representation of the "RetireMsa" extrinsic
encodedExtrinsic: string
// payload to be signed
payloadToSign: string
// AccountId in hex or SS58 format
accountId: string
}
#/components/schemas/RetireMsaRequestDto
{
// Hex-encoded representation of the "RetireMsa" extrinsic
encodedExtrinsic: string
// payload to be signed
payloadToSign: string
// AccountId in hex or SS58 format
accountId: string
// signature of the owner
signature: string
}
#/components/schemas/TransactionResponse
{
referenceId: string
}
#/components/schemas/SchemaDelegation
{
schemaId: number
revokedAtBlock?: number
}
#/components/schemas/Delegation
{
providerId: string
schemaDelegations: {
schemaId: number
revokedAtBlock?: number
}[]
revokedAtBlock?: number
}
#/components/schemas/DelegationResponseV2
{
msaId: string
delegations: {
providerId: string
schemaDelegations: {
schemaId: number
revokedAtBlock?: number
}[]
revokedAtBlock?: number
}[]
}
#/components/schemas/u32
{
}
#/components/schemas/DelegationResponse
{
providerId: string
schemaPermissions: {
}
revokedAt: {
}
}
#/components/schemas/RevokeDelegationPayloadResponseDto
{
// AccountId in hex or SS58 format
accountId: string
// MSA Id of the provider to whom the requesting user wishes to delegate
providerId: string
// Hex-encoded representation of the "revokeDelegation" extrinsic
encodedExtrinsic: string
// payload to be signed
payloadToSign: string
}
#/components/schemas/RevokeDelegationPayloadRequestDto
{
// AccountId in hex or SS58 format
accountId: string
// MSA Id of the provider to whom the requesting user wishes to delegate
providerId: string
// Hex-encoded representation of the "revokeDelegation" extrinsic
encodedExtrinsic: string
// payload to be signed
payloadToSign: string
// signature of the owner
signature: string
}
#/components/schemas/HandlePayloadDto
{
// base handle in the request
baseHandle: string
// expiration block number for this payload
expiration: number
}
#/components/schemas/HandleRequestDto
{
// AccountId in hex or SS58 format
accountId: string
payload: {
// base handle in the request
baseHandle: string
// expiration block number for this payload
expiration: number
}
// proof is the signature for the payload
proof: string
}
#/components/schemas/ChangeHandlePayloadRequest
{
payload: {
// base handle in the request
baseHandle: string
// expiration block number for this payload
expiration: number
}
// Raw encodedPayload is scale encoded of payload in hex format
encodedPayload: string
}
#/components/schemas/KeysRequestPayloadDto
{
// MSA Id of the user requesting the new key
msaId: string
// expiration block number for this payload
expiration: number
// newPublicKey in hex format
newPublicKey: string
}
#/components/schemas/KeysRequestDto
{
// msaOwnerAddress representing the target of this request
msaOwnerAddress: string
// msaOwnerSignature is the signature by msa owner
msaOwnerSignature: string
// newKeyOwnerSignature is the signature with new key
newKeyOwnerSignature: string
payload: {
// MSA Id of the user requesting the new key
msaId: string
// expiration block number for this payload
expiration: number
// newPublicKey in hex format
newPublicKey: string
}
}
#/components/schemas/KeysResponse
{
msaKeys: {
}
}
#/components/schemas/ItemActionType
{
"type": "string",
"description": "Action Item type",
"enum": [
"ADD_ITEM",
"DELETE_ITEM"
]
}
#/components/schemas/ItemActionDto
{
// Action Item type
type: enum[ADD_ITEM, DELETE_ITEM]
// encodedPayload to be added
encodedPayload?: string
// index of the item to be deleted
index?: number
}
#/components/schemas/ItemizedSignaturePayloadDto
{
actions: {
// Action Item type
type: enum[ADD_ITEM, DELETE_ITEM]
// encodedPayload to be added
encodedPayload?: string
// index of the item to be deleted
index?: number
}[]
// schemaId related to the payload
schemaId: number
// targetHash related to the stateful storage
targetHash: number
// expiration block number for this payload
expiration: number
}
#/components/schemas/AddNewPublicKeyAgreementPayloadRequest
{
payload: {
actions: {
// Action Item type
type: enum[ADD_ITEM, DELETE_ITEM]
// encodedPayload to be added
encodedPayload?: string
// index of the item to be deleted
index?: number
}[]
// schemaId related to the payload
schemaId: number
// targetHash related to the stateful storage
targetHash: number
// expiration block number for this payload
expiration: number
}
// Raw encodedPayload to be signed
encodedPayload: string
}
#/components/schemas/AddNewPublicKeyAgreementRequestDto
{
// AccountId in hex or SS58 format
accountId: string
payload: {
actions: {
// Action Item type
type: enum[ADD_ITEM, DELETE_ITEM]
// encodedPayload to be added
encodedPayload?: string
// index of the item to be deleted
index?: number
}[]
// schemaId related to the payload
schemaId: number
// targetHash related to the stateful storage
targetHash: number
// expiration block number for this payload
expiration: number
}
// proof is the signature for the payload
proof: string
}
Account Service
Webhooks API Reference
Open Direct API Reference Page
Method | Path | Description |
---|---|---|
POST | /transaction-notify | Notify transaction |
Reference Table
Name | Path | Description |
---|---|---|
TransactionType | #/components/schemas/TransactionType | |
TxWebhookRspBase | #/components/schemas/TxWebhookRspBase | |
PublishHandleOpts | #/components/schemas/PublishHandleOpts | |
SIWFOpts | #/components/schemas/SIWFOpts | |
PublishKeysOpts | #/components/schemas/PublishKeysOpts | |
PublishGraphKeysOpts | #/components/schemas/PublishGraphKeysOpts | |
TxWebhookOpts | #/components/schemas/TxWebhookOpts | |
PublishHandleWebhookRsp | #/components/schemas/PublishHandleWebhookRsp | |
SIWFWebhookRsp | #/components/schemas/SIWFWebhookRsp | |
PublishKeysWebhookRsp | #/components/schemas/PublishKeysWebhookRsp | |
PublishGraphKeysWebhookRsp | #/components/schemas/PublishGraphKeysWebhookRsp | |
RetireMsaWebhookRsp | #/components/schemas/RetireMsaWebhookRsp | |
RevokeDelegationWebhookRsp | #/components/schemas/RevokeDelegationWebhookRsp | |
TxWebhookRsp | #/components/schemas/TxWebhookRsp |
Path Details
[POST]/transaction-notify
- Summary
Notify transaction
RequestBody
- application/json
{
"oneOf": [
{
"$ref": "#/components/schemas/PublishHandleWebhookRsp"
},
{
"$ref": "#/components/schemas/SIWFWebhookRsp"
},
{
"$ref": "#/components/schemas/PublishKeysWebhookRsp"
},
{
"$ref": "#/components/schemas/PublishGraphKeysWebhookRsp"
},
{
"$ref": "#/components/schemas/RetireMsaWebhookRsp"
},
{
"$ref": "#/components/schemas/RevokeDelegationWebhookRsp"
}
]
}
Responses
-
200 Successful notification
-
400 Bad request
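A provider-side handler for these notifications typically dispatches on the transaction type. The field names below follow the TxWebhookRspBase and TransactionType schemas in the References section; the handler logic and message strings are illustrative.

```typescript
// Enum values from the TransactionType schema.
type TransactionType =
  | "CHANGE_HANDLE" | "CREATE_HANDLE" | "SIWF_SIGNUP" | "SIWF_SIGNIN"
  | "ADD_KEY" | "RETIRE_MSA" | "ADD_PUBLIC_KEY_AGREEMENT" | "REVOKE_DELEGATION";

// Fields from the TxWebhookRspBase schema.
interface TxWebhookRspBase {
  providerId: string;
  referenceId: string;
  msaId: string;
  transactionType?: TransactionType;
}

// Illustrative dispatch: route each notification type to app-specific handling.
function describeNotification(n: TxWebhookRspBase): string {
  switch (n.transactionType) {
    case "SIWF_SIGNUP":
      return `New account ${n.msaId} signed up`;
    case "RETIRE_MSA":
      return `Account ${n.msaId} retired`;
    default:
      return `Transaction ${n.referenceId} (${n.transactionType ?? "unknown"}) completed for MSA ${n.msaId}`;
  }
}
```

The referenceId lets you correlate each notification with the request that originally enqueued the transaction.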
References
#/components/schemas/TransactionType
{
"type": "string",
"enum": [
"CHANGE_HANDLE",
"CREATE_HANDLE",
"SIWF_SIGNUP",
"SIWF_SIGNIN",
"ADD_KEY",
"RETIRE_MSA",
"ADD_PUBLIC_KEY_AGREEMENT",
"REVOKE_DELEGATION"
],
"x-enum-varnames": [
"CHANGE_HANDLE",
"CREATE_HANDLE",
"SIWF_SIGNUP",
"SIWF_SIGNIN",
"ADD_KEY",
"RETIRE_MSA",
"ADD_PUBLIC_KEY_AGREEMENT",
"REVOKE_DELEGATION"
]
}
#/components/schemas/TxWebhookRspBase
{
providerId: string
referenceId: string
msaId: string
transactionType?: enum[CHANGE_HANDLE, CREATE_HANDLE, SIWF_SIGNUP, SIWF_SIGNIN, ADD_KEY, RETIRE_MSA, ADD_PUBLIC_KEY_AGREEMENT, REVOKE_DELEGATION]
}
#/components/schemas/PublishHandleOpts
{
handle: string
}
#/components/schemas/SIWFOpts
{
handle: string
accountId: string
}
#/components/schemas/PublishKeysOpts
{
newPublicKey: string
}
#/components/schemas/PublishGraphKeysOpts
{
schemaId: string
}
#/components/schemas/TxWebhookOpts
{
}
#/components/schemas/PublishHandleWebhookRsp
{
}
#/components/schemas/SIWFWebhookRsp
{
}
#/components/schemas/PublishKeysWebhookRsp
{
}
#/components/schemas/PublishGraphKeysWebhookRsp
{
}
#/components/schemas/RetireMsaWebhookRsp
{
}
#/components/schemas/RevokeDelegationWebhookRsp
{
}
#/components/schemas/TxWebhookRsp
{
"oneOf": [
{
"$ref": "#/components/schemas/PublishHandleWebhookRsp"
},
{
"$ref": "#/components/schemas/SIWFWebhookRsp"
},
{
"$ref": "#/components/schemas/PublishKeysWebhookRsp"
},
{
"$ref": "#/components/schemas/PublishGraphKeysWebhookRsp"
},
{
"$ref": "#/components/schemas/RetireMsaWebhookRsp"
},
{
"$ref": "#/components/schemas/RevokeDelegationWebhookRsp"
}
]
}
Content Publishing Service
The Content Publishing Service allows users to create, post, and manage content on the Frequency network. It supports various content types such as text, images, and videos.
API Reference
Configuration
ℹ️ Feel free to adjust your environment variables to taste. This application recognizes the following environment variables:
Name | Description | Range/Type | Required? | Default |
---|---|---|---|---|
API_PORT | HTTP port that the application listens on | 1025 - 65535 | 3000 | |
ASSET_EXPIRATION_INTERVAL_SECONDS | Number of seconds to keep completed asset entries in the cache before expiring them | > 0 | Y | |
ASSET_UPLOAD_VERIFICATION_DELAY_SECONDS | Base delay in seconds used for exponential backoff while waiting for uploaded assets to be verified available before publishing a content notice | >= 0 | Y | |
BATCH_INTERVAL_SECONDS | Number of seconds between publishing batches. This is so that the service waits a reasonable amount of time for additional content to publish before submitting a batch--it represents a trade-off between maximum batch fullness and minimal wait time for content | > 0 | Y | |
BATCH_MAX_COUNT | Maximum number of items that can be submitted in a single batch | > 0 | Y | |
CACHE_KEY_PREFIX | Prefix to use for Redis cache keys | string | Y | |
CAPACITY_LIMIT | Maximum amount of provider capacity this app is allowed to use (per epoch) type: 'percentage' 'amount' value: number (may be percentage, ie '80', or absolute amount of capacity) | JSON (example) | Y | |
FILE_UPLOAD_MAX_SIZE_IN_BYTES | Max file size (in bytes) allowed for asset upload | > 0 | Y | |
FILE_UPLOAD_COUNT_LIMIT | Max number of files to be able to upload at the same time via one upload call | > 0 | Y | |
FREQUENCY_API_WS_URL | Blockchain API Websocket URL | ws(s): URL | Y | |
FREQUENCY_TIMEOUT_SECS | Frequency chain connection timeout limit; app will terminate if disconnected longer | integer | 10 | |
IPFS_BASIC_AUTH_SECRET | If using Infura, put auth token here, or leave blank for Kubo RPC | string | blank | |
IPFS_BASIC_AUTH_USER | If using Infura, put Project ID here, or leave blank for Kubo RPC | string | blank | |
IPFS_ENDPOINT | URL to IPFS endpoint | URL | Y | |
IPFS_GATEWAY_URL | IPFS gateway URL '[CID]' is a token that will be replaced with an actual content ID | URL template | Y | |
PROVIDER_ACCOUNT_SEED_PHRASE | Seed phrase for provider MSA control key | string | Y | |
PROVIDER_ID | Provider MSA Id | integer | Y | |
REDIS_URL | Connection URL for Redis | URL | Y | |
API_TIMEOUT_MS | API timeout limit in milliseconds | > 0 | N | 60000 |
API_BODY_JSON_LIMIT | API JSON body size limit, as a string (e.g. '100kb', '5mb') | string | N | 1mb |
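Before deploying, the required settings in the table above can be sanity-checked with a short script. This is an illustrative sketch, not part of the service; the variable names are taken from the table, and the list of required names should be verified against your service version.

```python
import os

# Variables the table above marks as required (no default value)
REQUIRED_VARS = [
    "ASSET_EXPIRATION_INTERVAL_SECONDS",
    "ASSET_UPLOAD_VERIFICATION_DELAY_SECONDS",
    "BATCH_INTERVAL_SECONDS",
    "BATCH_MAX_COUNT",
    "CACHE_KEY_PREFIX",
    "CAPACITY_LIMIT",
    "FILE_UPLOAD_MAX_SIZE_IN_BYTES",
    "FILE_UPLOAD_COUNT_LIMIT",
    "FREQUENCY_API_WS_URL",
    "IPFS_ENDPOINT",
    "IPFS_GATEWAY_URL",
    "PROVIDER_ACCOUNT_SEED_PHRASE",
    "PROVIDER_ID",
    "REDIS_URL",
]

def missing_vars(env: dict) -> list:
    """Return required variables that are absent or empty in the given environment."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_vars(dict(os.environ))
    if missing:
        raise SystemExit("Missing required configuration: " + ", ".join(missing))
```

Running this in the same environment as the service container fails fast with the names of any unset required variables instead of letting the service crash later.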
Best Practices
- Metadata Management: Always ensure metadata is correctly associated with content to maintain data integrity.
- Content Validation: Validate content to prevent the submission of inappropriate or harmful material.
Content Publishing Service
API Reference
Path Table
Method | Path | Description |
---|---|---|
PUT | /v1/asset/upload | Upload asset files |
POST | /v1/content/{msaId}/broadcast | Create DSNP Broadcast for user |
POST | /v1/content/{msaId}/reply | Create DSNP Reply for user |
POST | /v1/content/{msaId}/reaction | Create DSNP Reaction for user |
PUT | /v1/content/{msaId} | Update DSNP Content for user |
DELETE | /v1/content/{msaId} | Delete DSNP Content for user |
PUT | /v1/profile/{msaId} | Update a user's Profile |
GET | /healthz | Check the health status of the service |
GET | /livez | Check the live status of the service |
GET | /readyz | Check the ready status of the service |
GET | /dev/request/{jobId} | Get a Job given a jobId |
POST | /dev/dummy/announcement/{queueType}/{count} | Create dummy announcement data |
Reference Table
Name | Path | Description |
---|---|---|
FilesUploadDto | #/components/schemas/FilesUploadDto | |
UploadResponseDto | #/components/schemas/UploadResponseDto | |
AssetReferenceDto | #/components/schemas/AssetReferenceDto | |
AssetDto | #/components/schemas/AssetDto | |
TagTypeEnum | #/components/schemas/TagTypeEnum | Identifies the tag type |
TagDto | #/components/schemas/TagDto | |
UnitTypeEnum | #/components/schemas/UnitTypeEnum | The units for radius and altitude (defaults to meters) |
LocationDto | #/components/schemas/LocationDto | |
NoteActivityDto | #/components/schemas/NoteActivityDto | |
BroadcastDto | #/components/schemas/BroadcastDto | |
AnnouncementResponseDto | #/components/schemas/AnnouncementResponseDto | |
ReplyDto | #/components/schemas/ReplyDto | |
ReactionDto | #/components/schemas/ReactionDto | |
ModifiableAnnouncementType | #/components/schemas/ModifiableAnnouncementType | Target announcement type |
UpdateDto | #/components/schemas/UpdateDto | |
TombstoneDto | #/components/schemas/TombstoneDto | |
ProfileActivityDto | #/components/schemas/ProfileActivityDto | |
ProfileDto | #/components/schemas/ProfileDto |
Path Details
[PUT]/v1/asset/upload
- Summary
Upload asset files
RequestBody
- multipart/form-data
{
files?: string[]
}
Responses
- 2XX
application/json
{
assetIds?: string[]
}
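Because this endpoint takes `multipart/form-data`, each file must be sent under the `files` field named in the FilesUploadDto schema. The helper below builds that multipart structure; the host and port in the commented call are assumptions for a local deployment, and `requests` is one common client choice, not something the service mandates.

```python
import mimetypes
from pathlib import Path

def build_upload_parts(paths):
    """Build the multipart entries for PUT /v1/asset/upload.

    Each entry is (field_name, (filename, bytes, content_type)); the field
    name 'files' matches the FilesUploadDto schema.
    """
    parts = []
    for p in map(Path, paths):
        content_type = mimetypes.guess_type(p.name)[0] or "application/octet-stream"
        parts.append(("files", (p.name, p.read_bytes(), content_type)))
    return parts

# With the 'requests' library against a locally running service (URL is an assumption):
# resp = requests.put("http://localhost:3000/v1/asset/upload",
#                     files=build_upload_parts(["cat.jpg"]))
# asset_ids = resp.json()["assetIds"]
```

The returned `assetIds` are the reference IDs to use later in `assets.references[].referenceId` when creating content.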
[POST]/v1/content/{msaId}/broadcast
- Summary
Create DSNP Broadcast for user
RequestBody
- application/json
{
content: {
// Text content of the note
content: string
// The time of publishing ISO8601
published: string
assets: {
// Determines if this asset is a link
isLink?: boolean
references: {
// The unique Id for the uploaded asset
referenceId: string
// A hint as to the rendering height in device-independent pixels for image or video assets
height?: number
// A hint as to the rendering width in device-independent pixels for image or video asset
width?: number
// Approximate duration of the video or audio asset
duration?: string
}[]
// The display name for the file
name?: string
// The URL for the given content
href?: string
}[]
// The display name for the activity type
name?: string
tag: {
// Identifies the tag type
type: enum[mention, hashtag]
// The text of the tag
name?: string
// Link to the user mentioned
mentionedId?: string
}[]
location: {
// The units for radius and altitude (defaults to meters)
units?: enum[cm, m, km, inches, feet, miles]
// The display name for the location
name: string
// The accuracy of the coordinates as a percentage. (e.g. 94.0 means 94.0% accurate)
accuracy?: number
// The altitude of the location
altitude?: number
// The latitude of the location
latitude?: number
// The longitude of the location
longitude?: number
// The area around the given point that comprises the location
radius?: number
}
}
}
Responses
- 2XX
application/json
{
referenceId: string
}
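A minimal broadcast body only needs the note's `content` and an ISO8601 `published` timestamp; assets are attached via the reference IDs returned by the upload endpoint. The sketch below builds such a body from the BroadcastDto schema above. Whether `assets`, `tag`, and `location` may be omitted in practice should be confirmed against the live Swagger for your service version.

```python
from datetime import datetime, timezone

def make_broadcast_body(text, asset_reference_ids=()):
    """Build a minimal BroadcastDto body for POST /v1/content/{msaId}/broadcast.

    Field names follow the schema above; 'published' must be an ISO8601 timestamp.
    """
    body = {
        "content": {
            "content": text,
            "published": datetime.now(timezone.utc).isoformat(),
        }
    }
    if asset_reference_ids:
        body["content"]["assets"] = [
            {"references": [{"referenceId": rid} for rid in asset_reference_ids]}
        ]
    return body
```

The 2XX response's `referenceId` identifies the announcement job, not the final on-chain content; publication happens asynchronously in batches.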
[POST]/v1/content/{msaId}/reply
- Summary
Create DSNP Reply for user
RequestBody
- application/json
{
// Target DSNP Content URI
inReplyTo: string
content: {
// Text content of the note
content: string
// The time of publishing ISO8601
published: string
assets: {
// Determines if this asset is a link
isLink?: boolean
references: {
// The unique Id for the uploaded asset
referenceId: string
// A hint as to the rendering height in device-independent pixels for image or video assets
height?: number
// A hint as to the rendering width in device-independent pixels for image or video asset
width?: number
// Approximate duration of the video or audio asset
duration?: string
}[]
// The display name for the file
name?: string
// The URL for the given content
href?: string
}[]
// The display name for the activity type
name?: string
tag: {
// Identifies the tag type
type: enum[mention, hashtag]
// The text of the tag
name?: string
// Link to the user mentioned
mentionedId?: string
}[]
location: {
// The units for radius and altitude (defaults to meters)
units?: enum[cm, m, km, inches, feet, miles]
// The display name for the location
name: string
// The accuracy of the coordinates as a percentage. (e.g. 94.0 means 94.0% accurate)
accuracy?: number
// The altitude of the location
altitude?: number
// The latitude of the location
latitude?: number
// The longitude of the location
longitude?: number
// The area around the given point that comprises the location
radius?: number
}
}
}
Responses
- 2XX
application/json
{
referenceId: string
}
[POST]/v1/content/{msaId}/reaction
- Summary
Create DSNP Reaction for user
RequestBody
- application/json
{
// The encoded reaction emoji
emoji: string
// Indicates whether the emoji should be applied and if so, at what strength
apply: number
// Target DSNP Content URI
inReplyTo: string
}
Responses
- 2XX
application/json
{
referenceId: string
}
[PUT]/v1/content/{msaId}
- Summary
Update DSNP Content for user
RequestBody
- application/json
{
// Target announcement type
targetAnnouncementType: enum[broadcast, reply]
// Target DSNP Content Hash
targetContentHash: string
content: {
// Text content of the note
content: string
// The time of publishing ISO8601
published: string
assets: {
// Determines if this asset is a link
isLink?: boolean
references: {
// The unique Id for the uploaded asset
referenceId: string
// A hint as to the rendering height in device-independent pixels for image or video assets
height?: number
// A hint as to the rendering width in device-independent pixels for image or video asset
width?: number
// Approximate duration of the video or audio asset
duration?: string
}[]
// The display name for the file
name?: string
// The URL for the given content
href?: string
}[]
// The display name for the activity type
name?: string
tag: {
// Identifies the tag type
type: enum[mention, hashtag]
// The text of the tag
name?: string
// Link to the user mentioned
mentionedId?: string
}[]
location: {
// The units for radius and altitude (defaults to meters)
units?: enum[cm, m, km, inches, feet, miles]
// The display name for the location
name: string
// The accuracy of the coordinates as a percentage. (e.g. 94.0 means 94.0% accurate)
accuracy?: number
// The altitude of the location
altitude?: number
// The latitude of the location
latitude?: number
// The longitude of the location
longitude?: number
// The area around the given point that comprises the location
radius?: number
}
}
}
Responses
- 2XX
application/json
{
referenceId: string
}
[DELETE]/v1/content/{msaId}
- Summary
Delete DSNP Content for user
RequestBody
- application/json
{
// Target announcement type
targetAnnouncementType: enum[broadcast, reply]
// Target DSNP Content Hash
targetContentHash: string
}
Responses
- 2XX
application/json
{
referenceId: string
}
[PUT]/v1/profile/{msaId}
- Summary
Update a user's Profile
RequestBody
- application/json
{
profile: {
icon: {
// The unique Id for the uploaded asset
referenceId: string
// A hint as to the rendering height in device-independent pixels for image or video assets
height?: number
// A hint as to the rendering width in device-independent pixels for image or video asset
width?: number
// Approximate duration of the video or audio asset
duration?: string
}[]
// Used as a plain text biography of the profile
summary?: string
// The time of publishing ISO8601
published?: string
// The display name for the activity type
name?: string
tag: {
// Identifies the tag type
type: enum[mention, hashtag]
// The text of the tag
name?: string
// Link to the user mentioned
mentionedId?: string
}[]
location: {
// The units for radius and altitude (defaults to meters)
units?: enum[cm, m, km, inches, feet, miles]
// The display name for the location
name: string
// The accuracy of the coordinates as a percentage. (e.g. 94.0 means 94.0% accurate)
accuracy?: number
// The altitude of the location
altitude?: number
// The latitude of the location
latitude?: number
// The longitude of the location
longitude?: number
// The area around the given point that comprises the location
radius?: number
}
}
}
Responses
- 202
[GET]/healthz
- Summary
Check the health status of the service
Responses
- 200 Service is healthy
[GET]/livez
- Summary
Check the live status of the service
Responses
- 200 Service is live
[GET]/readyz
- Summary
Check the ready status of the service
Responses
- 200 Service is ready
[GET]/dev/request/{jobId}
- Summary
Get a Job given a jobId
- Description
ONLY enabled when ENVIRONMENT="dev".
Responses
- 200
[POST]/dev/dummy/announcement/{queueType}/{count}
- Summary
Create dummy announcement data
- Description
ONLY enabled when ENVIRONMENT="dev".
Responses
- 201
References
#/components/schemas/FilesUploadDto
{
files?: string[]
}
#/components/schemas/UploadResponseDto
{
assetIds?: string[]
}
#/components/schemas/AssetReferenceDto
{
// The unique Id for the uploaded asset
referenceId: string
// A hint as to the rendering height in device-independent pixels for image or video assets
height?: number
// A hint as to the rendering width in device-independent pixels for image or video asset
width?: number
// Approximate duration of the video or audio asset
duration?: string
}
#/components/schemas/AssetDto
{
// Determines if this asset is a link
isLink?: boolean
references: {
// The unique Id for the uploaded asset
referenceId: string
// A hint as to the rendering height in device-independent pixels for image or video assets
height?: number
// A hint as to the rendering width in device-independent pixels for image or video asset
width?: number
// Approximate duration of the video or audio asset
duration?: string
}[]
// The display name for the file
name?: string
// The URL for the given content
href?: string
}
#/components/schemas/TagTypeEnum
{
"type": "string",
"description": "Identifies the tag type",
"enum": [
"mention",
"hashtag"
]
}
#/components/schemas/TagDto
{
// Identifies the tag type
type: enum[mention, hashtag]
// The text of the tag
name?: string
// Link to the user mentioned
mentionedId?: string
}
#/components/schemas/UnitTypeEnum
{
"type": "string",
"description": "The units for radius and altitude (defaults to meters)",
"enum": [
"cm",
"m",
"km",
"inches",
"feet",
"miles"
]
}
#/components/schemas/LocationDto
{
// The units for radius and altitude (defaults to meters)
units?: enum[cm, m, km, inches, feet, miles]
// The display name for the location
name: string
// The accuracy of the coordinates as a percentage. (e.g. 94.0 means 94.0% accurate)
accuracy?: number
// The altitude of the location
altitude?: number
// The latitude of the location
latitude?: number
// The longitude of the location
longitude?: number
// The area around the given point that comprises the location
radius?: number
}
#/components/schemas/NoteActivityDto
{
// Text content of the note
content: string
// The time of publishing ISO8601
published: string
assets: {
// Determines if this asset is a link
isLink?: boolean
references: {
// The unique Id for the uploaded asset
referenceId: string
// A hint as to the rendering height in device-independent pixels for image or video assets
height?: number
// A hint as to the rendering width in device-independent pixels for image or video asset
width?: number
// Approximate duration of the video or audio asset
duration?: string
}[]
// The display name for the file
name?: string
// The URL for the given content
href?: string
}[]
// The display name for the activity type
name?: string
tag: {
// Identifies the tag type
type: enum[mention, hashtag]
// The text of the tag
name?: string
// Link to the user mentioned
mentionedId?: string
}[]
location: {
// The units for radius and altitude (defaults to meters)
units?: enum[cm, m, km, inches, feet, miles]
// The display name for the location
name: string
// The accuracy of the coordinates as a percentage. (e.g. 94.0 means 94.0% accurate)
accuracy?: number
// The altitude of the location
altitude?: number
// The latitude of the location
latitude?: number
// The longitude of the location
longitude?: number
// The area around the given point that comprises the location
radius?: number
}
}
#/components/schemas/BroadcastDto
{
content: {
// Text content of the note
content: string
// The time of publishing ISO8601
published: string
assets: {
// Determines if this asset is a link
isLink?: boolean
references: {
// The unique Id for the uploaded asset
referenceId: string
// A hint as to the rendering height in device-independent pixels for image or video assets
height?: number
// A hint as to the rendering width in device-independent pixels for image or video asset
width?: number
// Approximate duration of the video or audio asset
duration?: string
}[]
// The display name for the file
name?: string
// The URL for the given content
href?: string
}[]
// The display name for the activity type
name?: string
tag: {
// Identifies the tag type
type: enum[mention, hashtag]
// The text of the tag
name?: string
// Link to the user mentioned
mentionedId?: string
}[]
location: {
// The units for radius and altitude (defaults to meters)
units?: enum[cm, m, km, inches, feet, miles]
// The display name for the location
name: string
// The accuracy of the coordinates as a percentage. (e.g. 94.0 means 94.0% accurate)
accuracy?: number
// The altitude of the location
altitude?: number
// The latitude of the location
latitude?: number
// The longitude of the location
longitude?: number
// The area around the given point that comprises the location
radius?: number
}
}
}
#/components/schemas/AnnouncementResponseDto
{
referenceId: string
}
#/components/schemas/ReplyDto
{
// Target DSNP Content URI
inReplyTo: string
content: {
// Text content of the note
content: string
// The time of publishing ISO8601
published: string
assets: {
// Determines if this asset is a link
isLink?: boolean
references: {
// The unique Id for the uploaded asset
referenceId: string
// A hint as to the rendering height in device-independent pixels for image or video assets
height?: number
// A hint as to the rendering width in device-independent pixels for image or video asset
width?: number
// Approximate duration of the video or audio asset
duration?: string
}[]
// The display name for the file
name?: string
// The URL for the given content
href?: string
}[]
// The display name for the activity type
name?: string
tag: {
// Identifies the tag type
type: enum[mention, hashtag]
// The text of the tag
name?: string
// Link to the user mentioned
mentionedId?: string
}[]
location: {
// The units for radius and altitude (defaults to meters)
units?: enum[cm, m, km, inches, feet, miles]
// The display name for the location
name: string
// The accuracy of the coordinates as a percentage. (e.g. 94.0 means 94.0% accurate)
accuracy?: number
// The altitude of the location
altitude?: number
// The latitude of the location
latitude?: number
// The longitude of the location
longitude?: number
// The area around the given point that comprises the location
radius?: number
}
}
}
#/components/schemas/ReactionDto
{
// The encoded reaction emoji
emoji: string
// Indicates whether the emoji should be applied and if so, at what strength
apply: number
// Target DSNP Content URI
inReplyTo: string
}
#/components/schemas/ModifiableAnnouncementType
{
"type": "string",
"description": "Target announcement type",
"enum": [
"broadcast",
"reply"
]
}
#/components/schemas/UpdateDto
{
// Target announcement type
targetAnnouncementType: enum[broadcast, reply]
// Target DSNP Content Hash
targetContentHash: string
content: {
// Text content of the note
content: string
// The time of publishing ISO8601
published: string
assets: {
// Determines if this asset is a link
isLink?: boolean
references: {
// The unique Id for the uploaded asset
referenceId: string
// A hint as to the rendering height in device-independent pixels for image or video assets
height?: number
// A hint as to the rendering width in device-independent pixels for image or video asset
width?: number
// Approximate duration of the video or audio asset
duration?: string
}[]
// The display name for the file
name?: string
// The URL for the given content
href?: string
}[]
// The display name for the activity type
name?: string
tag: {
// Identifies the tag type
type: enum[mention, hashtag]
// The text of the tag
name?: string
// Link to the user mentioned
mentionedId?: string
}[]
location: {
// The units for radius and altitude (defaults to meters)
units?: enum[cm, m, km, inches, feet, miles]
// The display name for the location
name: string
// The accuracy of the coordinates as a percentage. (e.g. 94.0 means 94.0% accurate)
accuracy?: number
// The altitude of the location
altitude?: number
// The latitude of the location
latitude?: number
// The longitude of the location
longitude?: number
// The area around the given point that comprises the location
radius?: number
}
}
}
#/components/schemas/TombstoneDto
{
// Target announcement type
targetAnnouncementType: enum[broadcast, reply]
// Target DSNP Content Hash
targetContentHash: string
}
#/components/schemas/ProfileActivityDto
{
icon: {
// The unique Id for the uploaded asset
referenceId: string
// A hint as to the rendering height in device-independent pixels for image or video assets
height?: number
// A hint as to the rendering width in device-independent pixels for image or video asset
width?: number
// Approximate duration of the video or audio asset
duration?: string
}[]
// Used as a plain text biography of the profile
summary?: string
// The time of publishing ISO8601
published?: string
// The display name for the activity type
name?: string
tag: {
// Identifies the tag type
type: enum[mention, hashtag]
// The text of the tag
name?: string
// Link to the user mentioned
mentionedId?: string
}[]
location: {
// The units for radius and altitude (defaults to meters)
units?: enum[cm, m, km, inches, feet, miles]
// The display name for the location
name: string
// The accuracy of the coordinates as a percentage. (e.g. 94.0 means 94.0% accurate)
accuracy?: number
// The altitude of the location
altitude?: number
// The latitude of the location
latitude?: number
// The longitude of the location
longitude?: number
// The area around the given point that comprises the location
radius?: number
}
}
#/components/schemas/ProfileDto
{
profile: {
icon: {
// The unique Id for the uploaded asset
referenceId: string
// A hint as to the rendering height in device-independent pixels for image or video assets
height?: number
// A hint as to the rendering width in device-independent pixels for image or video asset
width?: number
// Approximate duration of the video or audio asset
duration?: string
}[]
// Used as a plain text biography of the profile
summary?: string
// The time of publishing ISO8601
published?: string
// The display name for the activity type
name?: string
tag: {
// Identifies the tag type
type: enum[mention, hashtag]
// The text of the tag
name?: string
// Link to the user mentioned
mentionedId?: string
}[]
location: {
// The units for radius and altitude (defaults to meters)
units?: enum[cm, m, km, inches, feet, miles]
// The display name for the location
name: string
// The accuracy of the coordinates as a percentage. (e.g. 94.0 means 94.0% accurate)
accuracy?: number
// The altitude of the location
altitude?: number
// The latitude of the location
latitude?: number
// The longitude of the location
longitude?: number
// The area around the given point that comprises the location
radius?: number
}
}
}
Content Watcher Service
The Content Watcher Service monitors and retrieves the latest feed state, including content updates, reactions, and other user interactions on the Frequency network. It ensures that applications can stay up to date with the latest content and user activity.
API Reference
Configuration
ℹ️ Feel free to adjust your environment variables to taste. This application recognizes the following environment variables:
Name | Description | Range/Type | Required? | Default |
---|---|---|---|---|
API_PORT | HTTP port that the application listens on | 1025 - 65535 | N | 3000 |
BLOCKCHAIN_SCAN_INTERVAL_SECONDS | How many seconds to delay between successive scans of the chain for new content (after the end of the chain is reached) | > 0 | N | 12 |
CACHE_KEY_PREFIX | Prefix to use for Redis cache keys | string | N | content-watcher: |
FREQUENCY_API_WS_URL | Blockchain API Websocket URL | ws(s): URL | Y | |
FREQUENCY_TIMEOUT_SECS | Frequency chain connection timeout limit; app will terminate if disconnected longer | integer | N | 10 |
IPFS_BASIC_AUTH_SECRET | If required for read requests, put the Infura auth token here; leave blank for default Kubo RPC | string | N | blank |
IPFS_BASIC_AUTH_USER | If required for read requests, put the Infura Project ID here; leave blank for default Kubo RPC | string | N | blank |
IPFS_ENDPOINT | URL to IPFS endpoint | URL | Y | |
IPFS_GATEWAY_URL | IPFS gateway URL; '[CID]' is a token that will be replaced with the actual content ID | URL template | Y | |
QUEUE_HIGH_WATER | Max number of jobs allowed on the queue before the blockchain scan pauses to allow the queue to drain | >= 100 | N | 1000 |
REDIS_URL | Connection URL for Redis | URL | Y | |
STARTING_BLOCK | Block number from which the service will start scanning the chain | > 0 | N | 1 |
WEBHOOK_FAILURE_THRESHOLD | Number of failures allowed in the provider webhook before the service is marked down | > 0 | N | 3 |
WEBHOOK_RETRY_INTERVAL_SECONDS | Number of seconds between provider webhook retry attempts when failing | > 0 | N | 10 |
API_TIMEOUT_MS | API timeout limit in milliseconds | > 0 | N | 5000 |
API_BODY_JSON_LIMIT | API JSON body size limit, as a string (e.g. '100kb', '5mb') | string | N | 1mb |
Best Practices
- Efficient Polling: Implement efficient polling mechanisms to minimize load on the service.
- Webhook Security: Secure webhooks by verifying the source of incoming requests.
- Rate Limiting: Apply rate limiting to prevent abuse and ensure fair usage of the service.
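To illustrate the webhook-security practice: the Content Watcher Service itself does not define a signature scheme, so one common approach is a shared-secret HMAC over the raw request body, with the header name and signing scheme agreed between you and whatever registers the webhook. The sketch below is that generic pattern, not a documented service feature; network-level controls (mTLS, IP allowlists) are alternatives.

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Check an HMAC-SHA256 signature over the raw request body.

    The scheme is illustrative: the sender computes the same HMAC with the
    shared secret and transmits it (e.g. in a custom header); the receiver
    recomputes and compares before trusting the payload.
    """
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information during comparison
    return hmac.compare_digest(expected, signature_hex)
```

Reject any request whose signature fails verification before parsing or acting on its body.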
Content Watcher Service
API Reference
Path Table
Method | Path | Description |
---|---|---|
POST | /v1/scanner/reset | Reset blockchain scan to a specific block number or offset from the current position |
GET | /v1/scanner/options | Get the current watch options for the blockchain content event scanner |
POST | /v1/scanner/options | Set watch options to filter the blockchain content scanner by schemas or MSA Ids |
POST | /v1/scanner/pause | Pause the blockchain scanner |
POST | /v1/scanner/start | Resume the blockchain content event scanner |
POST | /v1/search | Search for DSNP content by id and filters, starting from upperBound block and going back for blockCount number of blocks |
POST | /v1/webhooks | Register a webhook to be called when new content is encountered on the chain |
DELETE | /v1/webhooks | Clear all previously registered webhooks |
GET | /v1/webhooks | Get the list of currently registered webhooks |
GET | /healthz | Check the health status of the service |
GET | /livez | Check the live status of the service |
GET | /readyz | Check the ready status of the service |
Reference Table
Name | Path | Description |
---|---|---|
ResetScannerDto | #/components/schemas/ResetScannerDto | |
ChainWatchOptionsDto | #/components/schemas/ChainWatchOptionsDto | |
ContentSearchRequestDto | #/components/schemas/ContentSearchRequestDto | |
HttpStatus | #/components/schemas/HttpStatus | Status of webhook registration response |
SearchResponseDto | #/components/schemas/SearchResponseDto | |
AnnouncementTypeName | #/components/schemas/AnnouncementTypeName | Announcement types to send to the webhook |
WebhookRegistrationDto | #/components/schemas/WebhookRegistrationDto | |
WebhookRegistrationResponseDto | #/components/schemas/WebhookRegistrationResponseDto |
Path Details
[POST]/v1/scanner/reset
- Summary
Reset blockchain scan to a specific block number or offset from the current position
RequestBody
- application/json
{
// The block number to reset the scanner to
blockNumber?: number
// Number of blocks to rewind the scanner to (from `blockNumber` if supplied; else from latest block)
rewindOffset?: number
// Whether to schedule the new scan immediately or wait for the next scheduled interval
immediate?: boolean
}
Responses
- 201
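The interplay of `blockNumber` and `rewindOffset` described in the schema comments can be expressed as a small helper. This is a sketch of the documented behavior for clarity, not service code, and the clamp to block 1 is an assumption.

```python
def resolve_reset_target(latest_block, block_number=None, rewind_offset=None):
    """Compute the block the scanner resets to, per the ResetScannerDto comments:
    rewindOffset rewinds from blockNumber when supplied, otherwise from the
    latest block; a bare blockNumber is used as-is.
    """
    base = block_number if block_number is not None else latest_block
    if rewind_offset is not None:
        base -= rewind_offset
    return max(base, 1)  # assumed: never rewind past the first block
```

For example, with the chain at block 1000, `{"rewindOffset": 100}` resumes scanning from block 900, while `{"blockNumber": 500, "rewindOffset": 100}` resumes from block 400.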
[GET]/v1/scanner/options
- Summary
Get the current watch options for the blockchain content event scanner
Responses
- 200
application/json
{
schemaIds?: number[]
dsnpIds?: string[]
}
[POST]/v1/scanner/options
- Summary
Set watch options to filter the blockchain content scanner by schemas or MSA Ids
RequestBody
- application/json
{
schemaIds?: number[]
dsnpIds?: string[]
}
Responses
- 201
[POST]/v1/scanner/pause
- Summary
Pause the blockchain scanner
Responses
- 201
[POST]/v1/scanner/start
- Summary
Resume the blockchain content event scanner
Parameters(Query)
immediate?: boolean
Responses
- 201
[POST]/v1/search
- Summary
Search for DSNP content by id and filters, starting from `upperBound` block and going back for `blockCount` number of blocks
RequestBody
- application/json
{
// An optional client-supplied reference ID by which it can identify the result of this search
clientReferenceId?: string
// The highest block number to start the backward search from
upperBoundBlock?: number
// The number of blocks to scan (backwards)
blockCount: number
// The schemaIds/dsnpIds to filter by
filters?: #/components/schemas/ChainWatchOptionsDto
// A webhook URL to be notified of the results of this search
webhookUrl: string
}
Responses
- 200 Returns a jobId to be used to retrieve the results
application/json
{
// Status of webhook registration response
status: enum[100, 101, 102, 103, 200, 201, 202, 203, 204, 205, 206, 300, 301, 302, 303, 304, 307, 308, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 421, 422, 424, 428, 429, 500, 501, 502, 503, 504, 505]
// Job id of search job
jobId: string
}
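Per the ContentSearchRequestDto schema above, only `blockCount` and `webhookUrl` are required; the rest narrow or label the search. The builder below is an illustrative sketch of assembling that body (the example URL and IDs in the usage note are hypothetical).

```python
def make_search_request(block_count, webhook_url, upper_bound=None,
                        schema_ids=None, dsnp_ids=None, client_ref=None):
    """Build a ContentSearchRequestDto body for POST /v1/search.

    Optional filters follow the ChainWatchOptionsDto schema; omitted fields
    are left out of the body entirely.
    """
    body = {"blockCount": block_count, "webhookUrl": webhook_url}
    if upper_bound is not None:
        body["upperBoundBlock"] = upper_bound
    if schema_ids or dsnp_ids:
        body["filters"] = {}
        if schema_ids:
            body["filters"]["schemaIds"] = list(schema_ids)
        if dsnp_ids:
            body["filters"]["dsnpIds"] = list(dsnp_ids)
    if client_ref is not None:
        body["clientReferenceId"] = client_ref
    return body
```

For example, `make_search_request(100, "https://example.com/hook", schema_ids=[5, 7])` scans the most recent 100 blocks for those schemas and delivers results to the given webhook; the response's `jobId` identifies the search job.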
[POST]/v1/webhooks
- Summary
Register a webhook to be called when new content is encountered on the chain
RequestBody
- application/json
{
// Announcement types to send to the webhook
announcementTypes?: enum[tombstone, broadcast, reply, reaction, profile, update][]
// Webhook URL
url: string
}
Responses
- 201
[DELETE]/v1/webhooks
- Summary
Clear all previously registered webhooks
Responses
- 200
[GET]/v1/webhooks
- Summary
Get the list of currently registered webhooks
Responses
- 200 Returns a list of registered webhooks
application/json
{
// Status of webhook registration response
status: enum[100, 101, 102, 103, 200, 201, 202, 203, 204, 205, 206, 300, 301, 302, 303, 304, 307, 308, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 421, 422, 424, 428, 429, 500, 501, 502, 503, 504, 505]
registeredWebhooks: {
// Announcement types to send to the webhook
announcementTypes?: enum[tombstone, broadcast, reply, reaction, profile, update][]
// Webhook URL
url: string
}[]
}
[GET]/healthz
- Summary
Check the health status of the service
Responses
- 200 Service is healthy
[GET]/livez
- Summary
Check the live status of the service
Responses
- 200 Service is live
[GET]/readyz
- Summary
Check the ready status of the service
Responses
- 200 Service is ready
References
#/components/schemas/ResetScannerDto
{
// The block number to reset the scanner to
blockNumber?: number
// Number of blocks to rewind the scanner to (from `blockNumber` if supplied; else from latest block)
rewindOffset?: number
// Whether to schedule the new scan immediately or wait for the next scheduled interval
immediate?: boolean
}
#/components/schemas/ChainWatchOptionsDto
{
schemaIds?: number[]
dsnpIds?: string[]
}
#/components/schemas/ContentSearchRequestDto
{
// An optional client-supplied reference ID by which it can identify the result of this search
clientReferenceId?: string
// The highest block number to start the backward search from
upperBoundBlock?: number
// The number of blocks to scan (backwards)
blockCount: number
// The schemaIds/dsnpIds to filter by
filters?: #/components/schemas/ChainWatchOptionsDto
// A webhook URL to be notified of the results of this search
webhookUrl: string
}
#/components/schemas/HttpStatus
{
"type": "number",
"description": "Status of webhook registration response",
"enum": [
100,
101,
102,
103,
200,
201,
202,
203,
204,
205,
206,
300,
301,
302,
303,
304,
307,
308,
400,
401,
402,
403,
404,
405,
406,
407,
408,
409,
410,
411,
412,
413,
414,
415,
416,
417,
418,
421,
422,
424,
428,
429,
500,
501,
502,
503,
504,
505
]
}
#/components/schemas/SearchResponseDto
{
// Status of webhook registration response
status: enum[100, 101, 102, 103, 200, 201, 202, 203, 204, 205, 206, 300, 301, 302, 303, 304, 307, 308, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 421, 422, 424, 428, 429, 500, 501, 502, 503, 504, 505]
// Job id of search job
jobId: string
}
#/components/schemas/AnnouncementTypeName
{
"type": "string",
"description": "Announcement types to send to the webhook",
"enum": [
"tombstone",
"broadcast",
"reply",
"reaction",
"profile",
"update"
]
}
#/components/schemas/WebhookRegistrationDto
{
// Announcement types to send to the webhook
announcementTypes?: enum[tombstone, broadcast, reply, reaction, profile, update][]
// Webhook URL
url: string
}
#/components/schemas/WebhookRegistrationResponseDto
{
// Status of webhook registration response
status: enum[100, 101, 102, 103, 200, 201, 202, 203, 204, 205, 206, 300, 301, 302, 303, 304, 307, 308, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 421, 422, 424, 428, 429, 500, 501, 502, 503, 504, 505]
registeredWebhooks: {
// Announcement types to send to the webhook
announcementTypes?: enum[tombstone, broadcast, reply, reaction, profile, update][]
// Webhook URL
url: string
}[]
}
Content Watcher Service
Webhooks Reference
Method | Path | Description |
---|---|---|
POST | /content-announcements | Notify a webhook client of a content announcement found on the blockchain |
Reference Table
Name | Path | Description |
---|---|---|
AnnouncementType | #/components/schemas/AnnouncementType | |
AnnouncementResponse | #/components/schemas/AnnouncementResponse | |
TypedAnnouncement | #/components/schemas/TypedAnnouncement | |
TombstoneAnnouncement | #/components/schemas/TombstoneAnnouncement | |
BroadcastAnnouncement | #/components/schemas/BroadcastAnnouncement | |
ReplyAnnouncement | #/components/schemas/ReplyAnnouncement | |
ReactionAnnouncement | #/components/schemas/ReactionAnnouncement | |
ProfileAnnouncement | #/components/schemas/ProfileAnnouncement | |
UpdateAnnouncement | #/components/schemas/UpdateAnnouncement |
Path Details
[POST]/content-announcements
- Summary
Notify a webhook client of a content announcement found on the blockchain
RequestBody
- application/json
{
// An optional identifier for the request, may be used for tracking or correlation
requestId?: string
// An optional webhook URL registered as part of a specific search request
webhookUrl?: string
// Identifier for the schema being used or referenced
schemaId: integer
// The block number on the blockchain where this announcement was recorded
blockNumber: integer
announcement: #/components/schemas/TombstoneAnnouncement | #/components/schemas/BroadcastAnnouncement | #/components/schemas/ReplyAnnouncement | #/components/schemas/ReactionAnnouncement | #/components/schemas/ProfileAnnouncement | #/components/schemas/UpdateAnnouncement
}
Responses
- 201 Content announcement notification received
- 400 Bad request
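A webhook client receiving `POST /content-announcements` must branch on the shape of the `announcement` field, since it is a union of the announcement schemas. A minimal Python sketch of such a dispatcher (the discriminating-key logic is an assumption inferred from the required fields of each schema; handler strings are illustrative):

```python
def handle_announcement(body: dict) -> str:
    """Dispatch an AnnouncementResponse based on the shape of its announcement.

    Heuristics follow the schemas' required fields: a ReactionAnnouncement
    carries an emoji; a TombstoneAnnouncement carries targetContentHash but
    no contentHash; an UpdateAnnouncement carries both; a ReplyAnnouncement
    carries inReplyTo; otherwise it is a broadcast or profile announcement.
    """
    ann = body["announcement"]
    if "emoji" in ann:
        return f"reaction {ann['emoji']} to {ann['inReplyTo']}"
    if "targetContentHash" in ann and "contentHash" not in ann:
        return f"tombstone for {ann['targetContentHash']}"
    if "inReplyTo" in ann:
        return f"reply at {ann['url']}"
    if "targetContentHash" in ann:
        return f"update of {ann['targetContentHash']}"
    return f"broadcast/profile content at {ann['url']}"

example = {
    "schemaId": 3,
    "blockNumber": 12345,
    "announcement": {
        "announcementType": 4,
        "fromId": "1234",
        "emoji": "\U0001F44D",
        "inReplyTo": "dsnp://1234/abc",
        "apply": 1,
    },
}
print(handle_announcement(example))  # a reaction announcement
```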
References
#/components/schemas/AnnouncementType
{
"enum": [
0,
2,
3,
4,
5,
6
],
"x-enum-varnames": [
"Tombstone",
"Broadcast",
"Reply",
"Reaction",
"Profile",
"Update"
]
}
#/components/schemas/AnnouncementResponse
{
// An optional identifier for the request, may be used for tracking or correlation
requestId?: string
// An optional webhook URL registered as part of a specific search request
webhookUrl?: string
// Identifier for the schema being used or referenced
schemaId: integer
// The block number on the blockchain where this announcement was recorded
blockNumber: integer
announcement: #/components/schemas/TombstoneAnnouncement | #/components/schemas/BroadcastAnnouncement | #/components/schemas/ReplyAnnouncement | #/components/schemas/ReactionAnnouncement | #/components/schemas/ProfileAnnouncement | #/components/schemas/UpdateAnnouncement
}
#/components/schemas/TypedAnnouncement
{
announcementType: AnnouncementType
fromId: string
}
#/components/schemas/TombstoneAnnouncement
{
"allOf": [
{
"$ref": "#/components/schemas/TypedAnnouncement"
},
{
"type": "object",
"properties": {
"targetAnnouncementType": {
"type": "integer"
},
"targetContentHash": {
"type": "string"
}
},
"required": [
"targetAnnouncementType",
"targetContentHash"
]
}
]
}
#/components/schemas/BroadcastAnnouncement
{
"allOf": [
{
"$ref": "#/components/schemas/TypedAnnouncement"
},
{
"type": "object",
"properties": {
"contentHash": {
"type": "string"
},
"url": {
"type": "string"
}
},
"required": [
"contentHash",
"url"
]
}
]
}
#/components/schemas/ReplyAnnouncement
{
"allOf": [
{
"$ref": "#/components/schemas/TypedAnnouncement"
},
{
"type": "object",
"properties": {
"contentHash": {
"type": "string"
},
"inReplyTo": {
"type": "string"
},
"url": {
"type": "string"
}
},
"required": [
"contentHash",
"inReplyTo",
"url"
]
}
]
}
#/components/schemas/ReactionAnnouncement
{
"allOf": [
{
"$ref": "#/components/schemas/TypedAnnouncement"
},
{
"type": "object",
"properties": {
"emoji": {
"type": "string"
},
"inReplyTo": {
"type": "string"
},
"apply": {
"type": "integer"
}
},
"required": [
"emoji",
"inReplyTo",
"apply"
]
}
]
}
#/components/schemas/ProfileAnnouncement
{
"allOf": [
{
"$ref": "#/components/schemas/TypedAnnouncement"
},
{
"type": "object",
"properties": {
"contentHash": {
"type": "string"
},
"url": {
"type": "string"
}
},
"required": [
"contentHash",
"url"
]
}
]
}
#/components/schemas/UpdateAnnouncement
{
"allOf": [
{
"$ref": "#/components/schemas/TypedAnnouncement"
},
{
"type": "object",
"properties": {
"contentHash": {
"type": "string"
},
"targetAnnouncementType": {
"type": "integer"
},
"targetContentHash": {
"type": "string"
},
"url": {
"type": "string"
}
},
"required": [
"contentHash",
"targetAnnouncementType",
"targetContentHash",
"url"
]
}
]
}
Graph Service
The Graph Service manages the social graphs, including follow/unfollow actions, blocking users, and other social interactions. It allows applications to maintain and query the social connections between users on the Frequency network.
API Reference
Configuration
ℹ️ Feel free to adjust your environment variables to taste. This application recognizes the following environment variables:
Name | Description | Range/Type | Required? | Default |
---|---|---|---|---|
API_PORT | HTTP port that the application listens on | 1025 - 65535 | N | 3000 |
BLOCKCHAIN_SCAN_INTERVAL_SECONDS | How many seconds to delay between successive scans of the chain (after end of chain is reached) | > 0 | N | 180 |
CACHE_KEY_PREFIX | Prefix to use for Redis cache keys | string | N | content-watcher: |
CAPACITY_LIMIT | Maximum amount of provider capacity this app is allowed to use (per epoch); a JSON object with type ('percentage' or 'amount') and value (a percentage, e.g. 80, or an absolute amount of capacity) | JSON (example) | Y | |
DEBOUNCE_SECONDS | Number of seconds to retain pending graph updates in the Redis cache to avoid redundant fetches from the chain | >= 0 | N | |
FREQUENCY_API_WS_URL | Blockchain API Websocket URL | ws(s): URL | Y | |
FREQUENCY_TIMEOUT_SECS | Frequency chain connection timeout limit; app will terminate if disconnected longer | integer | N | 10 |
GRAPH_ENVIRONMENT_TYPE | Graph environment type | Mainnet|TestnetPaseo | Y | |
WEBHOOK_FAILURE_THRESHOLD | Number of retry attempts to make when sending a webhook notification | > 0 | N | 3 |
WEBHOOK_RETRY_INTERVAL_SECONDS | Number of seconds between provider webhook retry attempts when failing | > 0 | N | 10 |
PROVIDER_ACCOUNT_SEED_PHRASE | Seed phrase for provider MSA control key | string | Y | |
PROVIDER_ID | Provider MSA Id | integer | Y | |
QUEUE_HIGH_WATER | Max number of jobs allowed on the 'graphUpdateQueue' before blockchain scan will be paused to allow queue to drain | >= 100 | N | 1000 |
REDIS_URL | Connection URL for Redis | URL | Y | |
API_TIMEOUT_MS | API timeout limit in milliseconds | > 0 | N | 5000 |
API_BODY_JSON_LIMIT | API JSON body size limit as a string (e.g., 100kb, 5mb) | string | N | 1mb |
AT_REST_ENCRYPTION_KEY_SEED | Secret seed used to generate the encryption/decryption key for sensitive data encrypted at rest | string | Y | |
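Pulling the required variables together, here is a hedged sketch of a minimal environment block for this service in docker-compose style (all values are illustrative; the seed phrase and Provider Id must be your own, and `//Alice` is a dev-only placeholder):

```yaml
# Illustrative graph-service environment (docker-compose style)
FREQUENCY_API_WS_URL: wss://0.rpc.testnet.amplica.io
REDIS_URL: redis://redis:6379
PROVIDER_ID: "1"
PROVIDER_ACCOUNT_SEED_PHRASE: "//Alice"        # dev-only placeholder
GRAPH_ENVIRONMENT_TYPE: TestnetPaseo
CAPACITY_LIMIT: '{"type":"percentage", "value":80}'
AT_REST_ENCRYPTION_KEY_SEED: "change-me"       # generate a real secret
```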
Best Practices
- Data Integrity: Ensure the integrity of social graph data by implementing robust validation checks.
- Efficient Queries: Optimize queries to handle large social graphs efficiently.
- User Privacy: Protect user privacy by ensuring that graph data is only accessible to authorized entities.
Graph Service
API Reference
Path Table
Method | Path | Description |
---|---|---|
POST | /v1/graphs/getGraphs | Fetch graphs for specified MSA Ids and Block Number |
PUT | /v1/graphs | Request an update to a given user's graph |
GET | /v1/webhooks | Get all registered webhooks |
PUT | /v1/webhooks | Watch graphs for specified dsnpIds and receive updates |
DELETE | /v1/webhooks | Delete all registered webhooks |
GET | /v1/webhooks/users/{msaId} | Get all registered webhooks for a specific MSA Id |
DELETE | /v1/webhooks/users/{msaId} | Delete all webhooks registered for a specific MSA |
GET | /v1/webhooks/urls | Get all webhooks registered to the specified URL |
DELETE | /v1/webhooks/urls | Delete all MSA webhooks registered with the given URL |
GET | /healthz | Check the health status of the service |
GET | /livez | Check the live status of the service |
GET | /readyz | Check the ready status of the service |
Reference Table
Name | Path | Description |
---|---|---|
PrivacyType | #/components/schemas/PrivacyType | Indicator connection type (public or private) |
ConnectionType | #/components/schemas/ConnectionType | Indicator of the type of connection (follow or friendship) |
KeyType | #/components/schemas/KeyType | Key type of graph encryption keypair (currently only X25519 supported) |
GraphKeyPairDto | #/components/schemas/GraphKeyPairDto | |
GraphsQueryParamsDto | #/components/schemas/GraphsQueryParamsDto | |
DsnpGraphEdgeDto | #/components/schemas/DsnpGraphEdgeDto | |
UserGraphDto | #/components/schemas/UserGraphDto | |
Direction | #/components/schemas/Direction | Indicator of the direction of this connection |
ConnectionDto | #/components/schemas/ConnectionDto | |
ConnectionDtoWrapper | #/components/schemas/ConnectionDtoWrapper | |
ProviderGraphDto | #/components/schemas/ProviderGraphDto | |
GraphChangeResponseDto | #/components/schemas/GraphChangeResponseDto | |
WatchGraphsDto | #/components/schemas/WatchGraphsDto |
Path Details
[POST]/v1/graphs/getGraphs
- Summary
Fetch graphs for specified MSA Ids and Block Number
RequestBody
- application/json
{
// Indicator connection type (public or private)
privacyType: enum[private, public]
// Indicator of the type of connection (follow or friendship)
connectionType: enum[follow, friendship]
dsnpIds?: string[]
graphKeyPairs: {
// Key type of graph encryption keypair (currently only X25519 supported)
keyType: enum[X25519]
// Public graph encryption key as a hex string (prefixed with "0x")
publicKey: string
// Private graph encryption key as a hex string (prefixed with "0x")
privateKey: string
}[]
}
Responses
- 200 Graphs retrieved successfully
application/json
{
// MSA Id that is the owner of the graph represented by the graph edges in this object
dsnpId: string
dsnpGraphEdges: {
// MSA Id of the user represented by this graph edge
userId: string
// Block number when connection represented by this graph edge was created
since: number
}[]
// Optional error message if the request failed
errorMessage?: string
}[]
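As a sketch, the request body for `getGraphs` can be assembled as follows (the MSA Ids are placeholders; fetching a private graph additionally requires the user's real X25519 key pair in `graphKeyPairs`):

```python
import json

def build_graphs_query(privacy_type: str, connection_type: str,
                       dsnp_ids=None, graph_key_pairs=None) -> dict:
    """Assemble a GraphsQueryParamsDto body for POST /v1/graphs/getGraphs."""
    assert privacy_type in ("private", "public")
    assert connection_type in ("follow", "friendship")
    body = {
        "privacyType": privacy_type,
        "connectionType": connection_type,
        "graphKeyPairs": graph_key_pairs or [],  # needed for private graphs
    }
    if dsnp_ids:
        body["dsnpIds"] = list(dsnp_ids)
    return body

# A public follow graph needs no key pairs
query = build_graphs_query("public", "follow", dsnp_ids=["1234"])
print(json.dumps(query))
```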
[PUT]/v1/graphs
- Summary
Request an update to a given user's graph
RequestBody
- application/json
{
// MSA Id that owns the connections represented in this object
dsnpId: string
// Array of connections known to the Provider for this MSA referenced in this object
connections: #/components/schemas/ConnectionDtoWrapper
graphKeyPairs: {
// Key type of graph encryption keypair (currently only X25519 supported)
keyType: enum[X25519]
// Public graph encryption key as a hex string (prefixed with "0x")
publicKey: string
// Private graph encryption key as a hex string (prefixed with "0x")
privateKey: string
}[]
// Optional URL of a webhook to invoke when the request is complete
webhookUrl?: string
}
Responses
- 201 Graph update request created successfully
application/json
{
// Reference ID by which the results/status of a submitted GraphChangeRequest may be retrieved
referenceId: string
}
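For illustration, a `ProviderGraphDto` requesting a single public follow can be built like this (MSA Ids and the webhook URL are placeholders; private connections would also require the user's graph key pairs):

```python
import json

def follow_request(owner_msa_id: str, target_msa_id: str, webhook_url=None) -> dict:
    """Build a ProviderGraphDto that adds one public follow.

    The single ConnectionDto uses direction=connectionTo, i.e. the owner
    follows the target.
    """
    body = {
        "dsnpId": owner_msa_id,
        "connections": {
            "data": [{
                "privacyType": "public",
                "direction": "connectionTo",
                "connectionType": "follow",
                "dsnpId": target_msa_id,
            }]
        },
        "graphKeyPairs": [],  # required only for private graphs
    }
    if webhook_url:
        body["webhookUrl"] = webhook_url
    return body

print(json.dumps(follow_request("1234", "5678", "https://example.com/graph-hooks")))
```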
[GET]/v1/webhooks
- Summary
Get all registered webhooks
Responses
- 200 Retrieved all registered webhooks
[PUT]/v1/webhooks
- Summary
Watch graphs for specified dsnpIds and receive updates
RequestBody
- application/json
{
dsnpIds?: string[]
// Webhook URL to call when graph changes for the referenced MSAs are detected
webhookEndpoint: string
}
Responses
- 200 Successfully started watching graphs
[DELETE]/v1/webhooks
- Summary
Delete all registered webhooks
Responses
- 200 Removed all registered webhooks
[GET]/v1/webhooks/users/{msaId}
- Summary
Get all registered webhooks for a specific MSA Id
Parameters(Query)
includeAll?: boolean
Responses
- 200 Retrieved all registered webhooks for the given MSA Id
application/json
string[]
[DELETE]/v1/webhooks/users/{msaId}
- Summary
Delete all webhooks registered for a specific MSA
Responses
- 200 Removed all registered webhooks for the specified MSA
[GET]/v1/webhooks/urls
- Summary
Get all webhooks registered to the specified URL
Parameters(Query)
url: string
Responses
- 200 Retrieved all watched MSA graphs registered to the specified URL
application/json
string[]
[DELETE]/v1/webhooks/urls
- Summary
Delete all MSA webhooks registered with the given URL
Parameters(Query)
url: string
Responses
- 200 Removed all webhooks registered to the specified URL
[GET]/healthz
- Summary
Check the health status of the service
Responses
- 200 Service is healthy
[GET]/livez
- Summary
Check the live status of the service
Responses
- 200 Service is live
[GET]/readyz
- Summary
Check the ready status of the service
Responses
- 200 Service is ready
References
#/components/schemas/PrivacyType
{
"type": "string",
"description": "Indicator connection type (public or private)",
"enum": [
"private",
"public"
]
}
#/components/schemas/ConnectionType
{
"type": "string",
"description": "Indicator of the type of connection (follow or friendship)",
"enum": [
"follow",
"friendship"
]
}
#/components/schemas/KeyType
{
"type": "string",
"description": "Key type of graph encryption keypair (currently only X25519 supported)",
"enum": [
"X25519"
]
}
#/components/schemas/GraphKeyPairDto
{
// Key type of graph encryption keypair (currently only X25519 supported)
keyType: enum[X25519]
// Public graph encryption key as a hex string (prefixed with "0x")
publicKey: string
// Private graph encryption key as a hex string (prefixed with "0x")
privateKey: string
}
#/components/schemas/GraphsQueryParamsDto
{
// Indicator connection type (public or private)
privacyType: enum[private, public]
// Indicator of the type of connection (follow or friendship)
connectionType: enum[follow, friendship]
dsnpIds?: string[]
graphKeyPairs: {
// Key type of graph encryption keypair (currently only X25519 supported)
keyType: enum[X25519]
// Public graph encryption key as a hex string (prefixed with "0x")
publicKey: string
// Private graph encryption key as a hex string (prefixed with "0x")
privateKey: string
}[]
}
#/components/schemas/DsnpGraphEdgeDto
{
// MSA Id of the user represented by this graph edge
userId: string
// Block number when connection represented by this graph edge was created
since: number
}
#/components/schemas/UserGraphDto
{
// MSA Id that is the owner of the graph represented by the graph edges in this object
dsnpId: string
dsnpGraphEdges: {
// MSA Id of the user represented by this graph edge
userId: string
// Block number when connection represented by this graph edge was created
since: number
}[]
// Optional error message if the request failed
errorMessage?: string
}
#/components/schemas/Direction
{
"type": "string",
"description": "Indicator of the direction of this connection",
"enum": [
"connectionTo",
"connectionFrom",
"bidirectional",
"disconnect"
]
}
#/components/schemas/ConnectionDto
{
// Indicator connection type (public or private)
privacyType: enum[private, public]
// Indicator of the direction of this connection
direction: enum[connectionTo, connectionFrom, bidirectional, disconnect]
// Indicator of the type of connection (follow or friendship)
connectionType: enum[follow, friendship]
// MSA Id representing the target of this connection
dsnpId: string
}
#/components/schemas/ConnectionDtoWrapper
{
data: {
// Indicator connection type (public or private)
privacyType: enum[private, public]
// Indicator of the direction of this connection
direction: enum[connectionTo, connectionFrom, bidirectional, disconnect]
// Indicator of the type of connection (follow or friendship)
connectionType: enum[follow, friendship]
// MSA Id representing the target of this connection
dsnpId: string
}[]
}
#/components/schemas/ProviderGraphDto
{
// MSA Id that owns the connections represented in this object
dsnpId: string
// Array of connections known to the Provider for this MSA referenced in this object
connections: #/components/schemas/ConnectionDtoWrapper
graphKeyPairs: {
// Key type of graph encryption keypair (currently only X25519 supported)
keyType: enum[X25519]
// Public graph encryption key as a hex string (prefixed with "0x")
publicKey: string
// Private graph encryption key as a hex string (prefixed with "0x")
privateKey: string
}[]
// Optional URL of a webhook to invoke when the request is complete
webhookUrl?: string
}
#/components/schemas/GraphChangeResponseDto
{
// Reference ID by which the results/status of a submitted GraphChangeRequest may be retrieved
referenceId: string
}
#/components/schemas/WatchGraphsDto
{
dsnpIds?: string[]
// Webhook URL to call when graph changes for the referenced MSAs are detected
webhookEndpoint: string
}
Graph Service
Webhooks Reference
Method | Path | Description |
---|---|---|
POST | graph-update | Announce a graph update |
POST | graph-request-status | Send the status of a requested graph update |
Reference Table
Name | Path | Description |
---|---|---|
GraphChangeNotificationV1 | #/components/schemas/GraphChangeNotificationV1 | |
GraphOperationStatusV1 | #/components/schemas/GraphOperationStatusV1 |
Path Details
[POST]graph-update
- Summary
Announce a graph update
RequestBody
- application/json
{
// MSA Id for which this notification is being sent
msaId: string
// Schema ID of graph that was updated
schemaId: number
// Page ID of graph page that was updated/deleted
pageId: number
// integer representation of the content hash of the updated page's previous state
prevContentHash: number
// integer representation of the content hash of the updated page's new state
currContentHash?: number
}
Responses
- 200 Graph update announcement handled
- 400 Bad request
[POST]graph-request-status
- Summary
Send the status of a requested graph update
RequestBody
- application/json
{
// Job reference ID of a previously submitted graph update request
referenceId: string
status: enum[pending, expired, failed, succeeded]
}
Responses
- 200 Graph operation status received
- 400 Bad request
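A provider's webhook endpoint can sanity-check these two payloads with a few structural tests before acting on them. A minimal sketch (function names are illustrative; field names follow the schemas):

```python
VALID_STATUSES = {"pending", "expired", "failed", "succeeded"}

def is_graph_change_notification(body: dict) -> bool:
    """Loose structural check for a GraphChangeNotificationV1 payload."""
    return (isinstance(body.get("msaId"), str)
            and isinstance(body.get("schemaId"), (int, float))
            and isinstance(body.get("pageId"), (int, float))
            and isinstance(body.get("prevContentHash"), (int, float)))

def is_graph_operation_status(body: dict) -> bool:
    """Loose structural check for a GraphOperationStatusV1 payload."""
    return (isinstance(body.get("referenceId"), str)
            and body.get("status") in VALID_STATUSES)

print(is_graph_operation_status({"referenceId": "abc123", "status": "succeeded"}))
```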
References
#/components/schemas/GraphChangeNotificationV1
{
// MSA Id for which this notification is being sent
msaId: string
// Schema ID of graph that was updated
schemaId: number
// Page ID of graph page that was updated/deleted
pageId: number
// integer representation of the content hash of the updated page's previous state
prevContentHash: number
// integer representation of the content hash of the updated page's new state
currContentHash?: number
}
#/components/schemas/GraphOperationStatusV1
{
// Job reference ID of a previously submitted graph update request
referenceId: string
status: enum[pending, expired, failed, succeeded]
}
Running Frequency Developer Gateway Services
In this section, you will find instructions to help you quickly set up Gateway Services. After testing your application with Gateway Services in your local environment, you can proceed to more advanced deployment options.
Every deployment and production environment is unique. Therefore, we recommend testing your application in a staging environment before deploying it to production. The guides in this section will assist you in getting started with the basics of deploying Gateway Services in various environments, including AWS, Kubernetes, and more.
Look for the Quick Start guide in the Run Gateway Services section to get started with Gateway Services in less than 5 minutes.
DevOps Deployment Quick Reference
Refer to the following sections to get a quick overview of the minimum requirements for deploying individual Gateway Services in different environments.
Gateway Service Common Requirements
The following environment variables are common to all Gateway Services. This snippet from the `docker-compose.yaml` file details the `x-common-environment` section that is required and shared across all services. Each service will have its own environment variables in addition to these common variables. Environment variables defined with the `${NAME}` syntax read their values from the shell environment, e.g. `export NAME=VALUE`. See below for more details.
FREQUENCY_API_WS_URL: ${FREQUENCY_API_WS_URL:-wss://0.rpc.testnet.amplica.io}
SIWF_NODE_RPC_URL: ${SIWF_NODE_RPC_URL:-https://0.rpc.testnet.amplica.io}
REDIS_URL: 'redis://redis:6379'
PROVIDER_ID: ${PROVIDER_ID:-1}
PROVIDER_ACCOUNT_SEED_PHRASE: ${PROVIDER_ACCOUNT_SEED_PHRASE:-//Alice}
WEBHOOK_FAILURE_THRESHOLD: 3
WEBHOOK_RETRY_INTERVAL_SECONDS: 10
HEALTH_CHECK_MAX_RETRIES: 4
HEALTH_CHECK_MAX_RETRY_INTERVAL_SECONDS: 10
HEALTH_CHECK_SUCCESS_THRESHOLD: 10
CAPACITY_LIMIT: '{"type":"percentage", "value":80}'
SIWF_URL: 'https://projectlibertylabs.github.io/siwf/v1/ui'
IPFS_ENDPOINT: ${IPFS_ENDPOINT:-http://ipfs:5001}
IPFS_GATEWAY_URL: ${IPFS_GATEWAY_URL:-https://ipfs.io/ipfs/[CID]}
IPFS_BASIC_AUTH_USER: ${IPFS_BASIC_AUTH_USER:-""}
IPFS_BASIC_AUTH_SECRET: ${IPFS_BASIC_AUTH_SECRET:-""}
QUEUE_HIGH_WATER: 1000
CHAIN_ENVIRONMENT: 'dev'
Each service requires a connection to a Redis instance. The `REDIS_URL` environment variable is set to `redis://redis:6379` by default. If you are using a different Redis instance, set `REDIS_URL` to the appropriate connection string.
Each service also requires a Docker network (or equivalent) to connect to any other containers. The default network is `gateway_net`. If you are using a different network, edit the `networks:` entry in the `docker-compose.yaml` to the appropriate network name.
Some services require a connection to an IPFS instance. See the IPFS Setup Guide for more information.
See the docker-compose-swarm.yaml for examples of Redis and IPFS services.
Account Service | Details |
---|---|
Docker Image | projectlibertylabs/account-service |
Dependencies | Redis |
API Ports | 3000 |
Inter-Service Ports | 3001, 6379, 9944 |
Docker Compose Services | account-service-api command: account-api |
account-service-worker command: account-worker | |
Required Variables | Account Service Environment Variables |
BLOCKCHAIN_SCAN_INTERVAL_SECONDS | |
TRUST_UNFINALIZED_BLOCKS | |
WEBHOOK_BASE_URL | |
GRAPH_ENVIRONMENT_TYPE | |
CACHE_KEY_PREFIX | |
SIWF_V2_URI_VALIDATION |
Graph Service | Details |
---|---|
Docker Image | projectlibertylabs/graph-service |
Dependencies | Redis, IPFS |
API Ports | 3000 |
Inter-Service Ports | 6379, 9944 |
Docker Compose Services | graph-service-api START_PROCESS: graph-api |
graph-service-worker START_PROCESS: graph-worker | |
Required Variables | Graph Service Environment Variables |
DEBOUNCE_SECONDS | |
GRAPH_ENVIRONMENT_TYPE | |
RECONNECTION_SERVICE_REQUIRED | |
CACHE_KEY_PREFIX | |
AT_REST_ENCRYPTION_KEY_SEED |
Content Publishing Service | Details |
---|---|
Docker Image | projectlibertylabs/content-publishing-service |
Dependencies | Redis, IPFS |
API Ports | 3000 |
Inter-Service Ports | 6379, 9944 |
Docker Compose Services | content-publishing-service-api START_PROCESS: content-publishing-api |
content-publishing-service-worker START_PROCESS: content-publishing-worker | |
Required Variables | Content Publishing Service Environment Variables |
START_PROCESS | |
FILE_UPLOAD_MAX_SIZE_IN_BYTES | |
FILE_UPLOAD_COUNT_LIMIT | |
ASSET_EXPIRATION_INTERVAL_SECONDS | |
BATCH_INTERVAL_SECONDS | |
BATCH_MAX_COUNT | |
ASSET_UPLOAD_VERIFICATION_DELAY_SECONDS | |
CACHE_KEY_PREFIX |
Content Watcher Service | Details |
---|---|
Docker Image | projectlibertylabs/content-watcher-service |
Dependencies | Redis, IPFS |
API Ports | 3000 |
Inter-Service Ports | 6379, 9944 |
Docker Compose Services | content-watcher-service |
Required Variables | Content Watcher Service Environment Variables |
STARTING_BLOCK | |
BLOCKCHAIN_SCAN_INTERVAL_SECONDS | |
WEBHOOK_FAILURE_THRESHOLD | |
CACHE_KEY_PREFIX |
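Tying the tables above together, here is a hedged sketch of a minimal docker-compose service entry for the Content Watcher Service (image, ports, and network follow the defaults described above; the environment values are illustrative and should be adjusted for your deployment):

```yaml
# Illustrative docker-compose fragment; assumes a 'redis' service and
# the 'gateway_net' network are defined elsewhere in the file.
services:
  content-watcher-service:
    image: projectlibertylabs/content-watcher-service
    ports:
      - "3000:3000"
    environment:
      REDIS_URL: redis://redis:6379
      FREQUENCY_API_WS_URL: wss://0.rpc.testnet.amplica.io
      STARTING_BLOCK: "1"
      BLOCKCHAIN_SCAN_INTERVAL_SECONDS: "180"
      WEBHOOK_FAILURE_THRESHOLD: "3"
      CACHE_KEY_PREFIX: "content-watcher:"
    networks:
      - gateway_net
```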
Other Deployment Guides
- Configuring and Managing Scalability
- Deployment on AWS
- Deployment with Kubernetes
- Monitoring with AWS CloudWatch
- NGINX Ingress
- Securing API Access with NGINX and Load Balancers
- Setting up IPFS
- Vault Integration
Running Frequency Developer Gateway Services
Prerequisites
To run this project, you need:
- Docker and Docker Compose (e.g. Docker Desktop)
- Git
Quick Start
Clone the repository
git clone https://github.com/ProjectLibertyLabs/gateway.git
cd gateway
Run the following command to configure and start the selected services
./start.sh
`start.sh` will guide you through the configuration process to start the services. It will ask a few questions and set the defaults intelligently. The following steps will be taken, and the resulting environment variables will be used by Docker to configure the services:
1. If `./start.sh` has previously been run, press `Enter` to use the previously saved parameters, or type `n` to start the configuration process fresh. If you choose the previously saved environment, the selected services will be started with those parameters immediately.
2. Press `Enter` to use the published Gateway Services containers (recommended), or type `n` to build the containers locally. If you choose to build the containers locally, see the Developer Docs for each service for further instructions on running the services locally.
3. Press `Enter` to connect to Frequency Paseo Testnet (recommended), or type `n` to connect to a local Frequency node.
4. Select the Gateway Services you want to start by answering `y` or `n` for each service:
   - Account Service: Manages user accounts and authentication.
   - Graph Service: Handles the creation and querying of social graphs.
   - Content Publishing Service: Manages the publishing and distribution of content.
   - Content Watcher Service: Monitors the chain for content announcements (new content, updates, etc.).
5. Choose the Frequency API Websocket URL for the selected services. The default will be set to the network chosen in step 3.
6. Choose the Sign In With Frequency RPC URL for the selected services. The default will be set to the network chosen in step 3.
7. Enter a Provider ID. See the links provided by `start.sh` for more information on Provider IDs.
8. Enter the seed phrase for the Provider ID. This will be used to sign transactions before sending them to the Frequency blockchain.
9. Choose to configure an IPFS Pinning Service or use the default IPFS container. See the IPFS Setup Guide for more information.
10. The configuration will be saved to `$HOME/.projectliberty/.env.gateway-dev` for future use. (Note: you can store multiple project profiles as `$HOME/.projectliberty/.env.<profile-name>` and access them by running `./start.sh -n <profile-name>`.)
11. `start.sh` uses `docker compose` to start the selected services with the provided configuration. It will print out how to access the services once they are running.
┌──────────────────────────────────────────────────────────────────────────────────────────────┐
│ 🔗💠📡 │
│ 🔗💠📡 The selected services are running. │
│ 🔗💠📡 You can access the Gateway at the following local addresses: │
│ 🔗💠📡 │
│ 🔗💠📡 * account-service: │
│ 🔗💠📡 - API: http://localhost:3013 │
│ 🔗💠📡 - Queue management: http://localhost:3013/queues │
│ 🔗💠📡 - Swagger UI: http://localhost:3013/docs/swagger │
│ 🔗💠📡 - Mock Webhook: http://mock-webhook-logger:3001/webhooks/account-service │
│ 🔗💠📡 (View log messages in docker) │
│ 🔗💠📡 │
│ 🔗💠📡 * graph-service: │
│ 🔗💠📡 - API: http://localhost:3012 │
│ 🔗💠📡 - Queue management: http://localhost:3012/queues │
│ 🔗💠📡 - Swagger UI: http://localhost:3012/docs/swagger │
│ 🔗💠📡 │
│ 🔗💠📡 │
└──────────────────────────────────────────────────────────────────────────────────────────────┘
Environment Variables
For more information on environment variables, see `ENVIRONMENT.md` in the `developer-docs` directory for your selected service.
IPFS Setup Guide
This guide will walk you through the steps required to set up and configure an IPFS (InterPlanetary File System) node, manage ingress and egress traffic, and explore third-party pinning services. IPFS is a distributed file system that enables decentralized data storage and sharing across peer-to-peer networks.
Table of Contents
- IPFS Setup Guide
Prerequisites
Make sure you have installed the following:
- Ubuntu 20.04+ (or any other Linux distribution).
- Go installed for building IPFS from source.
- npm installed for installing IPFS packages.
You can also run IPFS on Windows or macOS. Refer to the official IPFS installation guide for details.
1. Installing IPFS
There are two primary ways to install IPFS:
1.1. IPFS Desktop
For a graphical interface, IPFS Desktop is a user-friendly option available for Windows, macOS, and Linux. Download it from the IPFS Desktop page.
1.2. IPFS Daemon
For a more advanced, command-line interface, you can install go-ipfs (the official IPFS implementation) on any Unix-based system.
To install go-ipfs:
wget https://dist.ipfs.io/go-ipfs/v0.12.2/go-ipfs_v0.12.2_linux-amd64.tar.gz
tar -xvzf go-ipfs_v0.12.2_linux-amd64.tar.gz
cd go-ipfs
sudo bash install.sh
Verify the installation:
ipfs --version
2. Setting Up IPFS Node
2.1. Initialize IPFS
Once installed, initialize your IPFS repository:
ipfs init
This sets up your local IPFS repository, where data will be stored.
2.2. Starting the IPFS Daemon
To start your node:
ipfs daemon
Your node is now running and part of the global IPFS network. By default, it listens on localhost:5001 for API access and serves content through localhost:8080.
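Before wiring services to the node, you may want a quick programmatic check that those ports are actually reachable. A small sketch (a plain TCP probe, not an IPFS client; the hosts and ports assume the default local daemon configuration):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Useful for checking the IPFS API (5001) and HTTP gateway (8080).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With the daemon running, both the API and gateway ports should be open
for port in (5001, 8080):
    print(port, "open" if port_open("127.0.0.1", port) else "closed")
```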
3. IPFS Ingress and Egress
3.1. Managing Ingress Traffic
Ingress refers to incoming traffic or requests for files stored on your node. You can configure your IPFS node to handle specific network interfaces and protocols for managing this traffic. By default, IPFS allows ingress through all public interfaces.
For secure or limited access, consider using IPFS gateways or configuring Nginx or Traefik to act as reverse proxies. Example using Nginx:
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://127.0.0.1:8080; # IPFS local HTTP Gateway
proxy_set_header Host $host;
}
}
You can find more details about reverse proxy configuration in Nginx documentation or Traefik documentation.
3.2. Managing Egress Traffic
Egress refers to outgoing traffic, including requests your node makes to the IPFS network for files it doesn't have locally. You can limit this traffic by adjusting your node's bandwidth profile:
ipfs config profile apply lowpower
For a more detailed network configuration, refer to IPFS's networking documentation.
4. Using Third-Party Pinning Services
Pinning services allow you to store and replicate content across multiple IPFS nodes, ensuring data persists even if your local node goes offline.
Here are some popular services:
4.1. Pinata
Pinata is one of the most widely used IPFS pinning services. It offers simple integration and a user-friendly interface for managing your pinned content.
To use Pinata:
- Create an account at Pinata.
- Get your API key from the dashboard.
- Use the Pinata API or integrate it directly with your IPFS node for automated pinning.
4.2. Web3.Storage
Web3.Storage provides free decentralized storage using IPFS and Filecoin. It's an excellent solution for developers working in the Web3 space.
To use Web3.Storage:
- Sign up at web3.storage.
- Use the Web3.Storage client or API to interact with your pinned data.
4.3. Filebase
Filebase offers an S3-compatible interface for storing files on IPFS. It simplifies managing large-scale IPFS deployments with a more familiar cloud-based experience.
To use Filebase:
- Create a free account on Filebase.
- Configure your IPFS client to connect to Filebase using their API.
5. Verifying and Managing IPFS Node
To ensure your IPFS node is running correctly, check the status of your node:
ipfs id
This command returns your node's Peer ID, which you can share so others may access your content.
You can also monitor your node's performance:
ipfs stats bitswap
For additional management tasks like peer discovery, publishing content, and garbage collection, refer to the IPFS documentation.
Conclusion
By following this guide, you've successfully set up an IPFS node, configured ingress and egress, and learned about third-party pinning services for enhanced data availability. With IPFS, you are now part of a decentralized network for distributed storage and sharing.
For more advanced configurations or integrating IPFS with your services, consider exploring additional features such as IPFS Cluster for node orchestration or Filecoin for long-term decentralized storage.
Deploying Frequency Developer Gateway Services on AWS EC2
This guide provides example step-by-step instructions to deploy the Gateway services on AWS EC2 instances using Docker Swarm and Kubernetes. You may have to modify these instructions based on your actual AWS configuration. These instructions are provided as a general guide and may also be adapted for other cloud providers. Part 3 also includes Terraform examples to automate the deployments in a cloud-agnostic manner.
Table of Contents
- Deploying Frequency Developer Gateway Services on AWS EC2
- Table of Contents
- Prerequisites
- Part 1: Deploying with Docker Swarm
- Part 2: Deploying with Kubernetes
- Part 3: Automating with Terraform
- Conclusion
Prerequisites
- AWS Account: Access to create EC2 instances.
- AWS CLI configured with your AWS credentials and appropriate permissions.
- Terraform installed on your local machine.
- SSH Key Pair for accessing EC2 instances.
- Basic Knowledge: Familiarity with Docker, Kubernetes, and Terraform.
Part 1: Deploying with Docker Swarm
1.1 Setting Up AWS EC2 Instances
Step 1: Launch EC2 Instances
- AMI: Use the latest Ubuntu Server LTS.
- Instance Type: t2.medium or higher.
- Number of Instances: At least 3 (1 manager, 2 workers).
- Security Group:
- Allow SSH (port 22).
- Allow Docker Swarm ports:
- TCP 2376 (Docker daemon).
- TCP/UDP 7946 (communication among nodes).
- UDP 4789 (overlay network traffic).
- TCP 2377 (Swarm manager communication).
- Allow Gateway service ports (Note: these depend on your Swarm port mappings; the defaults start at 30000):
- TCP 30000-32767 (Swarm mode routing mesh).
- OR specific ports for each service; see SERVICE_PORT_X in docker-compose-swarm.yaml.
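The security-group rules above can also be applied to an existing group with the AWS CLI. This is a sketch: the group ID and the node CIDR below are placeholders, so substitute your own values.

```shell
# Placeholders -- replace with your security group ID and node subnet CIDR.
SG_ID="sg-0123456789abcdef0"
NODE_CIDR="10.0.0.0/16"

# SSH from anywhere (restrict the CIDR in production).
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 22 --cidr 0.0.0.0/0

# Swarm TCP ports, limited to the node subnet.
for port in 2376 2377 7946; do
  aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --protocol tcp --port "$port" --cidr "$NODE_CIDR"
done

# Node gossip (UDP 7946) and overlay/VXLAN traffic (UDP 4789).
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol udp --port 7946 --cidr "$NODE_CIDR"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol udp --port 4789 --cidr "$NODE_CIDR"

# Swarm routing-mesh port range for the published Gateway services.
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 30000-32767 --cidr 0.0.0.0/0
```

Requires the AWS CLI to be configured with credentials that can modify the security group.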
Step 2: Configure SSH Access
- Attach your SSH key pair to the instances.
- Note the public IP addresses for each instance.
1.2 Installing Docker Swarm
Log in to each EC2 instance (or other cloud instances) using SSH.
Step 1: Update Packages
sudo apt-get update
Step 2: Install Docker Engine
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
Step 3: Initialize Swarm on Manager Node
On the manager node:
sudo docker swarm init --advertise-addr <manager-node-ip>
Step 4: Join Worker Nodes to Swarm
On the manager node, get the join token:
sudo docker swarm join-token worker
docker swarm join --token SWMTKN-1-1tbk3g4qxoshrnzmx6a3fzoz9yyf6wxtaca33xwnt2fykd95et-1je480mao8ubve9xesiq3dym2 <manager-node-ip>:2377
Save the join token for later use. On each worker node, run the join command provided, e.g.:
sudo docker swarm join --token <token> <manager-node-ip>:2377 --advertise-addr <worker-node-ip>
Once you have your entire Swarm cluster set up, check the status on the manager node:
sudo docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
q2lq4y0tzuwbrb17kddignc0p ip-10-173-10-61 Ready Active 27.3.1
6j201nmxjf54zwhjya0xxbl3d * ip-10-173-10-112 Ready Active Leader 24.0.7
ylett3pu2wz1p4heo1vdhz20w ip-10-173-11-194 Ready Active 27.3.1
1.3 Deploying Gateway Services
Step 1: Clone the Gateway Repository
git clone https://github.com/ProjectLibertyLabs/gateway.git
cd gateway/deployment/swarm
Step 2: Deploy the Stack
The repo includes an example docker-compose-swarm.yaml file for deploying the Gateway services on Docker Swarm. Edit the file to set the correct environment variables and service ports. Take note of the number of replicas for each service. The default is set to 3.
Note: docker-compose-swarm.yaml is the only file required to deploy the Gateway services on Docker Swarm. If you prefer not to clone the entire repository, you can copy this file to your Docker Swarm manager node.
sudo docker stack deploy -c docker-compose-swarm.yaml gateway
Step 3: Verify the Deployment
sudo docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
y3bkq23881md gateway_account-service-api replicated 3/3 projectlibertylabs/account-service:latest *:30000->3000/tcp
yp455xvoa9gz gateway_account-service-worker replicated 3/3 projectlibertylabs/account-service:latest
y263ft5sbvhz gateway_redis replicated 3/3 redis:latest *:30001->6379/tcp
This stack was deployed without setting the SERVICE_PORT_X environment variables, so the default port mappings (30000, 30001) are used.
Step 4: Debugging and other useful commands
If you encounter issues, you can check the logs of the services on the manager node. The manager node will show the logs for all replicas of a service.
docker service logs gateway_account-service-api
docker service logs gateway_account-service-worker
To update the stack, edit the docker-compose-swarm.yaml file and redeploy:
sudo docker stack deploy -c docker-compose-swarm.yaml gateway
To remove the stack, run:
sudo docker stack rm gateway
You can also check the logs on a specific worker node by logging in to that node and running:
docker ps
docker logs <container-id>
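To avoid hunting for the container ID by eye, you can filter by name; the `gateway_account-service` prefix below matches the stack and service names used earlier in this guide.

```shell
# List only the Gateway account-service containers on this worker.
docker ps --filter "name=gateway_account-service" --format '{{.ID}} {{.Names}}'

# Tail the logs of the first matching API container.
docker logs --tail 100 -f "$(docker ps -q --filter 'name=gateway_account-service-api' | head -n1)"
```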
Part 2: Deploying with Kubernetes
2.1 Setting Up AWS EC2 Instances
Step 1: Launch EC2 Instances
- AMI: Use the latest Ubuntu Server LTS.
- Instance Type: t2.medium or higher.
- Number of Instances: At least 3 (1 master, 2 worker nodes).
- Security Group:
- Allow SSH (port 22).
- Allow Kubernetes ports:
- TCP 6443 (API server).
- TCP 2379-2380 (etcd server client API).
- TCP 10250 (kubelet API).
- TCP 10251 (kube-scheduler).
- TCP 10252 (kube-controller-manager).
- TCP/UDP 30000-32767 (NodePort Services).
2.2 Installing Kubernetes Cluster
Step 1: Update Packages
On all nodes:
sudo apt-get update
Step 2: Disable Swap
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
Step 3: Install Docker
sudo apt-get install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker
Step 4: Install Kubernetes Components
sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Add Kubernetes repository:
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
Install kubeadm, kubelet, and kubectl:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
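Note: the legacy apt.kubernetes.io repository referenced above has been deprecated and frozen. If the steps above fail on a current system, use the community-owned pkgs.k8s.io repository instead; the v1.30 channel below is an example, so pick the minor version you need.

```shell
# Add the signing key and repository for the pkgs.k8s.io package mirror.
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install and pin the Kubernetes components as before.
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```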
Step 5: Initialize Kubernetes Master
On the master node:
sudo kubeadm init --apiserver-advertise-address=<master-node-ip> --pod-network-cidr=192.168.0.0/16
Set up local kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Step 6: Install a Pod Network (Weave Net)
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Step 7: Join Worker Nodes
On the master node, get the join command:
kubeadm token create --print-join-command
On each worker node, run the join command:
sudo kubeadm join <master-node-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
2.3 Deploying Gateway Services
Step 1: Clone the Gateway Repository
On the master node:
git clone https://github.com/ProjectLibertyLabs/gateway.git
cd gateway
Step 2: Kubernetes Deployment and Service Files
Here we will follow modified instructions from the Kubernetes documentation to deploy the Frequency Developer Gateway using Helm.
Refer to the section "5. Deploying Frequency Developer Gateway" in kubernetes.md for detailed steps on a local Kubernetes cluster.
Step 2.1: Prepare Helm Chart
An example Helm chart (for example, frequency-gateway) is provided in the repository under deployment/k8s/frequency-gateway/. Make sure your values.yaml contains the correct configuration for NodePorts and services.
Things to consider:
- FREQUENCY_URL - URL of the Frequency Chain API
- REDIS_URL - URL of the Redis server
- IPFS_ENDPOINT - IPFS endpoint for pinning content
- IPFS_GATEWAY_URL - IPFS gateway URL for fetching content
- PROVIDER_ACCOUNT_SEED_PHRASE - Seed phrase for the provider account
- PROVIDER_ID - MSA ID of the provider account
Sample values.yaml excerpt:
service:
  type: NodePort
  account:
    port: 8080
    targetPort: http-account
    deploy: true  # Set to true to deploy
  contentPublishing:
    port: 8081
    targetPort: http-publishing
    deploy: true
  contentWatcher:
    port: 8082
    targetPort: http-watcher
    deploy: true
  graph:
    port: 8083
    targetPort: http-graph
    deploy: true
Step 3: Deploy with Helm
Deploy gateway with Helm:
helm install frequency-gateway deployment/k8s/frequency-gateway/
Once deployed, verify that your Helm release is deployed:
helm list
You should see the status as deployed.
Step 4: Verify the Deployment
kubectl get deployments
kubectl get pods
kubectl get services
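Beyond checking that the pods and services exist, you can smoke-test a service through its NodePort. The node IP lookup below is standard kubectl; the 31780 port and the /docs/swagger path follow the examples in this guide and may differ in your cluster.

```shell
# Grab the internal IP of the first node, then hit the account service's
# Swagger UI through its NodePort.
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
curl -i "http://${NODE_IP}:31780/docs/swagger"
```

A 200 response with the Swagger page confirms the service is reachable; a connection refusal usually means the NodePort or security-group configuration needs review.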
Part 3: Automating with Terraform
3.1 Terraform Configuration
Terraform can automate the provisioning of cloud compute resources. This repo includes example Terraform configurations for deploying EC2 instances on AWS, using Docker Swarm or Kubernetes for orchestration.
terraform/examples/
├── aws-docker-swarm
│ ├── main.tf
│ ├── variables.tf
│ ├── user-data.sh
│ └── outputs.tf
└── aws-k8s-cluster
├── main.tf
├── variables.tf
├── user-data.tftpl
└── outputs.tf
Step 1: Create Directory Structure
mkdir terraform-deployment
cd terraform-deployment
Step 2: Initialize Terraform Configuration
Create a main.tf file with the following content:
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app_server" {
  ami                    = "ami-0c94855ba95c71c99" # Ubuntu Server 18.04 LTS (replace with latest)
  instance_type          = "t2.medium"
  count                  = 3
  key_name               = "your-key-pair"
  vpc_security_group_ids = [aws_security_group.instance.id]

  tags = {
    Name = "AppServer-${count.index}"
  }
}

resource "aws_security_group" "instance" {
  name        = "instance-sg"
  description = "Allow SSH and required ports"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Add additional ingress rules as needed

  # Allow outbound traffic. Terraform-managed security groups do not keep
  # AWS's default allow-all egress rule unless it is declared explicitly,
  # and the instances need outbound access to download Docker and packages.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

output "instance_ips" {
  value = aws_instance.app_server.*.public_ip
}
3.2 Provisioning Resources
Step 1: Initialize Terraform
terraform init
Step 2: Plan the Deployment
terraform plan
Step 3: Apply the Deployment
terraform apply
3.3 Deploying with Terraform
While Terraform can provision infrastructure, deploying applications and configuring services like Docker Swarm or Kubernetes requires additional tooling or scripting.
Option 1: Use Provisioners (Not Recommended for Complex Configurations)
You can use Terraform provisioners to run scripts on the instances after they are created.
Example:
resource "aws_instance" "app_server" {
  # ... (previous configuration)

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y docker.io",
      # Additional commands
    ]

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("~/.ssh/your-private-key.pem")
      host        = self.public_ip
    }
  }
}
Option 2: Use Configuration Management Tools
For more complex setups, consider using tools like Ansible, Chef, or Puppet in conjunction with Terraform.
Conclusion
This guide walked you through deploying the Gateway services using Docker Swarm and Kubernetes on AWS EC2 instances. It also provided Terraform examples to automate the infrastructure provisioning in a cloud-agnostic way. By following these steps, you can set up and manage your microservices deployment efficiently.
Frequency Developer Gateway Kubernetes Deployment Guide
This guide will help you set up, configure, and test your Kubernetes services on Ubuntu using MicroK8s and kubectl.
Table of Contents
- Frequency Developer Gateway Kubernetes Deployment Guide
- Table of Contents
- Prerequisites
- 1. Installing MicroK8s
- 2. Setting Up MicroK8s
- 3. Enable Kubernetes Add-ons in MicroK8s
- 4. (Optional) Installing kubectl
- 5. Deploying Frequency Developer Gateway
- 6. Accessing Kubernetes Services
- 7. Finding the Host Machine's IP Address
- 8. Verifying and Troubleshooting
- 9. Tearing Down the Deployment
- 10. Conclusion
Prerequisites
Before starting, ensure the following:
- Ubuntu 20.04+.
- MicroK8s installed and configured.
- Helm installed for managing Kubernetes applications.
- kubectl installed for interacting with Kubernetes clusters (optional if you're using microk8s kubectl).
- Redis installed and running.
- Frequency Chain running and accessible from the Kubernetes cluster.
See this guide for more details on installing MicroK8s and Helm.
1. Installing MicroK8s
Install MicroK8s using the following command:
sudo snap install microk8s --classic --channel=1.28/stable
Once installed, verify the installation:
microk8s status --wait-ready
2. Setting Up MicroK8s
To manage MicroK8s as a regular user, you need to add your user to the microk8s group:
sudo usermod -aG microk8s $USER
sudo chown -f -R $USER ~/.kube
Then, apply the changes to the current session:
newgrp microk8s
Verify again:
microk8s status --wait-ready
3. Enable Kubernetes Add-ons in MicroK8s
To enhance your cluster functionality, you can enable the following MicroK8s add-ons:
sudo microk8s enable dns ingress storage helm3
- DNS: For service discovery.
- Ingress: To expose services externally.
- Storage: Dynamic storage provisioning.
- Helm3: Helm package manager for Kubernetes.
4. (Optional) Installing kubectl
If kubectl isn't already installed, you can use the following command to install it:
sudo snap install kubectl --classic
5. Deploying Frequency Developer Gateway
5.1. Prepare Helm Chart
An example Helm chart (for example, frequency-gateway) is provided in the repository under deployment/k8s/frequency-gateway/. Make sure your values.yaml contains the correct configuration for NodePorts and services.
Things to consider:
- FREQUENCY_URL - URL of the Frequency Chain API
- REDIS_URL - URL of the Redis server
- IPFS_ENDPOINT - IPFS endpoint for pinning content
- IPFS_GATEWAY_URL - IPFS gateway URL for fetching content
- PROVIDER_ACCOUNT_SEED_PHRASE - Seed phrase for the provider account
- PROVIDER_ID - MSA ID of the provider account
Sample values.yaml excerpt:
service:
  type: NodePort
  account:
    port: 8080
    targetPort: http-account
    deploy: true  # Set to true to deploy
  contentPublishing:
    port: 8081
    targetPort: http-publishing
    deploy: true
  contentWatcher:
    port: 8082
    targetPort: http-watcher
    deploy: true
  graph:
    port: 8083
    targetPort: http-graph
    deploy: true
5.2. Deploy with Helm
Deploy gateway with Helm:
sudo microk8s helm3 install frequency-gateway deployment/k8s/frequency-gateway/
Once deployed, verify that your Helm release is deployed:
sudo microk8s helm3 list
You should see the status as deployed.
6. Accessing Kubernetes Services
By default, Kubernetes services are exposed on localhost. Here's how to access them:
6.1. Accessing via NodePort
After deployment, check the NodePorts:
sudo microk8s kubectl get services
This will show output like:
frequency-gateway NodePort 10.152.183.81 <none> 8080:31780/TCP,8081:30315/TCP,8082:31250/TCP,8083:31807/TCP 8s
The services are accessible via:
- Port 8080: http://<node-ip>:31780
- Port 8081: http://<node-ip>:30315
- Port 8082: http://<node-ip>:31250
- Port 8083: http://<node-ip>:31807
Note: node-ip is internal to the Kubernetes cluster. To access the services externally, you need to find the host machine's IP address.
6.2. Port-Forward for Local Testing
If you just need to expose ports for local testing, you can use kubectl port-forward:
sudo microk8s kubectl port-forward svc/frequency-gateway 3013:8080 &
sudo microk8s kubectl port-forward svc/frequency-gateway 3014:8081 &
sudo microk8s kubectl port-forward svc/frequency-gateway 3015:8082 &
sudo microk8s kubectl port-forward svc/frequency-gateway 3016:8083 &
This will forward traffic from your localhost to the Kubernetes services.
Access the Swagger UI at http://localhost:3013/docs/swagger. Note that kubectl port-forward binds to localhost by default; pass --address 0.0.0.0 to expose the forwarded ports on the host's external IP.
7. Finding the Host Machine's IP Address
If you need to access the services externally from another machine on the same network, you need the host machine's IP.
To find the IP address of the host:
hostname -I
This will return a list of IP addresses. Use the first IP (likely the local IP of your machine).
Example:
http://<host-ip>:8080
http://<host-ip>:8081
http://<host-ip>:8082
http://<host-ip>:8083
8. Verifying and Troubleshooting
Check Pods and Services
sudo microk8s kubectl get pods
sudo microk8s kubectl get services
Inspect Pod Logs
If any pods are not running as expected, you can check logs:
sudo microk8s kubectl logs <pod-name>
Checking Resources
sudo microk8s kubectl describe pod <pod-name>
sudo microk8s kubectl describe service <service-name>
9. Tearing Down the Deployment
To delete the Helm release and clean up:
sudo microk8s helm3 uninstall frequency-gateway
Alternatively, to delete all Kubernetes resources:
sudo microk8s kubectl delete all --all
10. Conclusion
You've successfully deployed Frequency Developer Gateway on Kubernetes using Helm, exposing the services via NodePorts for local access. You can also expand this setup by using Ingress for broader network access or by setting up a cloud-based Kubernetes environment for production deployments.
NGINX Ingress for Frequency Developer Gateway
Table of Contents
- NGINX Ingress for Frequency Developer Gateway
Introduction
In this guide, we will walk through the process of setting up NGINX Ingress for the Frequency Developer Gateway on MicroK8s. This includes configuring Ingress rules, managing paths for various services, and ensuring proper security measures through CORS (Cross-Origin Resource Sharing) configurations.
Prerequisites
- MicroK8s installed and configured.
- Helm installed for managing Kubernetes applications.
- Basic understanding of Kubernetes and Helm concepts.
Setting Up NGINX Ingress
Step 1: Enable NGINX Ingress Controller
To use NGINX Ingress, you must first enable the Ingress controller in MicroK8s:
sudo microk8s enable ingress
This command will deploy the NGINX Ingress controller, which will handle incoming traffic and direct it to the appropriate services based on your Ingress resource configurations.
Step 2: Configure the Ingress Resource
Create an Ingress resource that defines how to route traffic to your services. The Ingress resource will map incoming paths to your application's backend services. Below is a high-level overview of the configurations you'll need:
- Paths: Define the specific paths for each service (e.g., /account, /content-publishing).
- Rewrite Rules: Use rewrite rules to ensure that requests to the Ingress path are forwarded correctly to the appropriate service paths.
Example Configuration
While we will not include full YAML code here, ensure that your Ingress resource includes:
- Annotations for CORS settings to manage cross-origin requests effectively.
- Paths mapped to the correct backend services.
Step 3: Implement CORS Configurations
CORS is essential for allowing or restricting resources requested from another domain. In your Ingress annotations, include the following configurations:
- nginx.ingress.kubernetes.io/cors-allow-origin: Set to * for development; restrict to specific domains in production.
- nginx.ingress.kubernetes.io/cors-allow-methods: Specify the allowed HTTP methods (GET, POST, PUT, DELETE, OPTIONS).
- nginx.ingress.kubernetes.io/cors-allow-headers: Define which headers can be included in the request.
Example Annotations
annotations:
  nginx.ingress.kubernetes.io/enable-cors: "true"
  nginx.ingress.kubernetes.io/cors-allow-origin: "*"
  nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, DELETE, OPTIONS"
  nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization"
Step 4: Deploy the Ingress Resource
After configuring your Ingress resource, deploy it using Helm:
helm install frequency-gateway ./path-to-your-helm-chart
Testing the Ingress Configuration
To test your Ingress setup, you can use curl to check the various paths defined in your Ingress resource:
# Test the /account path
curl -i http://127.0.0.1/account/docs/swagger
# Test the /content-publishing path
curl -i http://127.0.0.1/content-publishing/some-endpoint
# Test the /content-watcher path
curl -i http://127.0.0.1/content-watcher/some-endpoint
# Test the /graph path
curl -i http://127.0.0.1/graph/some-endpoint
The -i flag includes the HTTP response headers in the output, which is useful for debugging.
Expected Responses
- A successful request should return a 200 status code along with the expected content.
- A 404 status code indicates that the path is not found, which may require reviewing your Ingress resource configuration.
Best Practices for CORS and Security
- Limit CORS Origins: For production environments, restrict cors-allow-origin to only trusted domains instead of using *.
- Use HTTPS: Ensure that your application is served over HTTPS. This can be configured with the nginx.ingress.kubernetes.io/ssl-redirect annotation.
- Set Security Headers: Add additional security headers to your Ingress annotations to help protect your application from common vulnerabilities.
- Regularly Review Your Configurations: Ensure that your Ingress configurations are reviewed and updated as needed, especially after changes to your services.
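The HTTPS and security-header recommendations above can be expressed as Ingress annotations. This is a sketch: the header values are illustrative defaults, and the configuration-snippet annotation may be disabled by policy on some ingress-nginx installations.

```yaml
annotations:
  # Redirect plain HTTP requests to HTTPS at the ingress.
  nginx.ingress.kubernetes.io/ssl-redirect: "true"
  # Inject common security headers into every response.
  nginx.ingress.kubernetes.io/configuration-snippet: |
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "DENY" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
```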
Conclusion
Configuring NGINX Ingress for your Frequency Developer Gateway in MicroK8s is a straightforward process that can greatly enhance your application's routing capabilities. By properly setting up paths and CORS configurations, you can ensure that your services are accessible and secure. Always remember to follow best practices for security, especially when dealing with cross-origin requests.
Frequency Developer Gateway - Vault Integration
This guide describes how to set up and integrate HashiCorp Vault for managing and securely storing secrets used by the Frequency Developer Gateway services. The integration allows sensitive data such as API tokens, account seed phrases, and credentials to be securely managed through Vault, rather than hardcoding them in Kubernetes manifests.
Prerequisites
- Vault installed and running.
- Kubernetes cluster (with Frequency Developer Gateway deployed).
- Helm installed (for deploying Vault and Frequency Developer Gateway).
- Vault configured with Kubernetes authentication.
Table of Contents
- Frequency Developer Gateway - Vault Integration
1. Overview
Vault is used to manage and securely inject secrets into the Frequency Developer Gateway services, such as:
- account-service: Stores provider access tokens and account seed phrases.
- content-publishing-service: Manages IPFS authentication secrets.
- content-watcher-service: Handles watcher credentials and IPFS secrets.
- graph-service: Manages provider access tokens and account details.
2. Vault Setup
2.1 Enable Key-Value (KV) Secret Engine
To store secrets in Vault, you must first enable the KV secret engine. You can do this with the following command:
vault secrets enable -path=secret kv
2.2 Create Secrets
Vault allows you to store your secrets under a defined path. For the Frequency Developer Gateway, secrets are stored under paths like secret/data/frequency-gateway/[service-name]. You can add secrets using the Vault CLI.
For example, to create secrets for the account service:
vault kv put secret/frequency-gateway/account PROVIDER_ACCESS_TOKEN=<your-access-token> PROVIDER_ACCOUNT_SEED_PHRASE=<your-seed-phrase>
Similarly, create secrets for other services:
- Content Publishing Service:
vault kv put secret/frequency-gateway/content-publishing IPFS_BASIC_AUTH_USER=<username> IPFS_BASIC_AUTH_SECRET=<password> PROVIDER_ACCOUNT_SEED_PHRASE=<your-seed-phrase>
- Content Watcher Service:
vault kv put secret/frequency-gateway/content-watcher IPFS_BASIC_AUTH_USER=<username> IPFS_BASIC_AUTH_SECRET=<password>
- Graph Service:
vault kv put secret/frequency-gateway/graph PROVIDER_ACCESS_TOKEN=<your-access-token> PROVIDER_ACCOUNT_SEED_PHRASE=<your-seed-phrase>
2.3 Set Up Kubernetes Authentication
To allow your Kubernetes cluster to access secrets in Vault, you need to configure Vault's Kubernetes authentication method.
- Enable the Kubernetes Auth Method:
vault auth enable kubernetes
- Configure the Kubernetes Auth Method:
Get the Kubernetes service account token and CA certificate, then configure Vault with this information:
vault write auth/kubernetes/config \
  token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  kubernetes_host="https://<KUBERNETES_HOST>:6443" \
  kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- Create a Vault Role for Frequency Developer Gateway:
Bind the Vault role to the appropriate Kubernetes service accounts, which allows the Frequency Developer Gateway pods to retrieve secrets.
vault write auth/kubernetes/role/frequency-gateway-role \
  bound_service_account_names=frequency-gateway \
  bound_service_account_namespaces=default \
  policies=default \
  ttl=24h
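The role above attaches only the default policy, which does not grant access to the secret paths created earlier. A working setup needs a policy allowing reads on those paths; the policy name below is illustrative.

```shell
# KV v2 reads go through secret/data/..., so the policy must match that path.
vault policy write frequency-gateway-read - <<'EOF'
path "secret/data/frequency-gateway/*" {
  capabilities = ["read"]
}
EOF

# Re-create the role with the read policy attached instead of "default".
vault write auth/kubernetes/role/frequency-gateway-role \
  bound_service_account_names=frequency-gateway \
  bound_service_account_namespaces=default \
  policies=frequency-gateway-read \
  ttl=24h
```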
3. Integrating Vault with Frequency Developer Gateway
3.1 Helm Configuration
Update your values.yaml file to enable Vault integration for the Frequency Developer Gateway deployment. Here's an example configuration:
vault:
  enabled: true  # Enable Vault integration
  address: "http://vault.default.svc.cluster.local:8200"
  role: "frequency-gateway-role"
  tokenSecretName: "vault-token"
  tokenSecret: "root"
  secretsPath: "secret/data/frequency-gateway"
For each service (e.g., account-service, content-publishing-service) in the Frequency Developer Gateway, add a similar configuration to the values.yaml file.
Deploy or upgrade the Frequency Developer Gateway Helm chart with:
helm upgrade --install frequency-gateway ./helm/frequency-gateway -f values.yaml
3.2 Creating External Secret and Secret Store
To securely connect Kubernetes resources with Vault, you need to use the External Secrets and Secret Store. This allows Kubernetes services to dynamically fetch secrets from Vault.
- Create a Secret Store:
Create a SecretStore resource to configure how Kubernetes connects to Vault. Example configuration for a Vault backend:
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-secret-store
  namespace: default
spec:
  provider:
    vault:
      server: "{{ .Values.vault.address }}"
      path: "secret"
      version: "v2"
      auth:
        tokenSecretRef:
          name: vault-token
          key: root
Apply the SecretStore configuration:
kubectl apply -f secret-store.yaml
- Create the External Secret:
Example Helm template for ExternalSecret resources:
{{- if .Values.vault.enabled }}
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: account-secret
spec:
  backendType: vault
  vault:
    server: "{{ .Values.vault.address }}"
    path: "{{ .Values.vault.secretsPath }}/account"
    version: "v2"
    auth:
      tokenSecretRef:
        name: "{{ .Values.vault.tokenSecretName }}"
        key: "{{ .Values.vault.tokenSecret }}"
  data:
    - key: PROVIDER_ACCESS_TOKEN
      name: PROVIDER_ACCESS_TOKEN
    - key: PROVIDER_ACCOUNT_SEED_PHRASE
      name: PROVIDER_ACCOUNT_SEED_PHRASE
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: content-publishing-secret
spec:
  backendType: vault
  vault:
    server: "{{ .Values.vault.address }}"
    path: "{{ .Values.vault.secretsPath }}/content-publishing"
    version: "v2"
    auth:
      tokenSecretRef:
        name: "{{ .Values.vault.tokenSecretName }}"
        key: "{{ .Values.vault.tokenSecret }}"
  data:
    - key: IPFS_BASIC_AUTH_USER
      name: IPFS_BASIC_AUTH_USER
    - key: IPFS_BASIC_AUTH_SECRET
      name: IPFS_BASIC_AUTH_SECRET
    - key: PROVIDER_ACCOUNT_SEED_PHRASE
      name: PROVIDER_ACCOUNT_SEED_PHRASE
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: graph-secret
spec:
  backendType: vault
  vault:
    server: "{{ .Values.vault.address }}"
    path: "{{ .Values.vault.secretsPath }}/graph"
    version: "v2"
    auth:
      tokenSecretRef:
        name: "{{ .Values.vault.tokenSecretName }}"
        key: "{{ .Values.vault.tokenSecret }}"
  data:
    - key: PROVIDER_ACCESS_TOKEN
      name: PROVIDER_ACCESS_TOKEN
    - key: PROVIDER_ACCOUNT_SEED_PHRASE
      name: PROVIDER_ACCOUNT_SEED_PHRASE
{{- end }}
3.3 Handling Single-Value Secrets
For single-value secrets, use the ExternalSecret configuration to point to a specific key.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: account-secret
spec:
  backendType: vault
  vault:
    server: "{{ .Values.vault.address }}"
    path: "{{ .Values.vault.secretsPath }}/account"
    version: "v2"
    auth:
      tokenSecretRef:
        name: "{{ .Values.vault.tokenSecretName }}"
        key: "{{ .Values.vault.tokenSecret }}"
  data:
    - key: PROVIDER_ACCESS_TOKEN
      name: PROVIDER_ACCESS_TOKEN
4. Accessing Secrets
Once Vault is integrated, services in the Frequency Developer Gateway will automatically retrieve secrets at runtime.
4.1 Using CLI
To manually retrieve secrets using the Vault CLI:
vault kv get secret/frequency-gateway/account
vault kv get secret/frequency-gateway/content-publishing
vault kv get secret/frequency-gateway/content-watcher
vault kv get secret/frequency-gateway/graph
To retrieve specific fields:
vault kv get -field=PROVIDER_ACCESS_TOKEN secret/frequency-gateway/account
5. Troubleshooting
- Vault Access Errors: Ensure that the Kubernetes authentication method is correctly configured, and the service accounts are bound to the appropriate Vault roles.
- Secrets Not Being Retrieved: Double-check your values.yaml file for correct Vault paths and the service's configuration for secret access.
For more information, refer to the Vault documentation.
Securing API Access with NGINX and Load Balancers
In this section, we will discuss best practices for securing API access, focusing on using NGINX as a reverse proxy, handling CORS configurations, and using load balancers to enhance security and scalability.
Note: refer to this guide for setting up NGINX Ingress in Kubernetes.
Table of Contents
- Securing API Access with NGINX and Load Balancers
1. Using NGINX as a Reverse Proxy
NGINX can act as an entry point for your APIs, providing a layer of security by:
- Hiding internal architecture: Clients interact with NGINX, not directly with your services.
- Traffic filtering: Only valid requests are forwarded to the backend services.
- CORS handling: Cross-Origin Resource Sharing (CORS) policies can be enforced to control which external origins are allowed to access the API.
- Rate limiting: You can limit the number of requests from a single client to prevent abuse.
- SSL/TLS Termination: Secure communication can be established by terminating SSL at the proxy layer.
1.1 Example: Enforcing CORS in NGINX
You can configure CORS in your NGINX Ingress as follows:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "*"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, DELETE, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization"
nginx.ingress.kubernetes.io/cors-expose-headers: "Content-Length,Content-Range"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
1.2 Security Tip 1
- Avoid setting cors-allow-origin: "*". Restrict it to trusted domains to prevent unauthorized access.
- Enable strict validation for Authorization headers and other sensitive information.
2. Using Load Balancers for Scalability and Security
A load balancer ensures even distribution of traffic across multiple instances of your services. It also contributes to security by:
- DDoS protection: Load balancers can absorb and mitigate large volumes of traffic, ensuring service availability.
- SSL/TLS Termination: This can happen at the load balancer, offloading the processing load from the application layer.
- Session Stickiness: For APIs that require session persistence, the load balancer can keep requests from the same client routed to the same backend instance.
2.1 Using a Load Balancer for TLS Termination
When using a load balancer with TLS termination, all encrypted communications with clients are handled by the load balancer. The load balancer decrypts the traffic and forwards it to NGINX (or your API gateway) as plain HTTP requests. This setup improves performance and security by centralizing certificate management.
Example:
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 80
      protocol: TCP
      name: https
  selector:
    app: my-nginx
In this example, the LoadBalancer listens on port 443 for TLS traffic and forwards it as HTTP (port 80) to NGINX.
2.2 Security Tip 2
- Use a trusted CA for certificates.
- Ensure strict SSL/TLS configurations with up-to-date ciphers and disable weak encryption methods.
3. Best Practices for API Security
- Rate Limiting: Ensure NGINX or your gateway implements rate limiting to avoid API abuse.
- Authentication and Authorization: Use tokens (e.g., OAuth2, JWT) to verify clients and their permissions before granting access.
- Monitoring and Logging: Always log API requests, including their origin and headers, to track potential security issues.
- API Gateway Security: If you use a gateway service (such as Frequency Developer Gateway), ensure it handles secure API routing, load balancing, and traffic filtering.
- DDoS Protection: Use external services like Cloudflare or AWS Shield if you expect large volumes of traffic that might lead to denial-of-service attacks.
3.1 Testing the Setup with curl
To verify your NGINX ingress configuration, use `curl` commands to simulate requests and inspect the responses. For example:
```sh
curl -i http://<your-nginx-address>/account/docs/swagger
```

This command should return your Swagger UI if the ingress and backend are properly configured.

If using CORS, test it with specific headers:

```sh
curl -i -H "Origin: http://example.com" http://<your-nginx-address>/account
```

This helps validate that only allowed origins can access your API.
4. Conclusion
A layered approach to securing API access combines NGINX as a reverse proxy, a load balancer for scaling and TLS termination, and proper CORS and security configurations to provide robust protection for your services. Regular testing and monitoring further enhance the reliability of your setup.
Scalability Guide for Frequency Developer Gateway
This guide explains how to configure and manage scalability using Kubernetes Horizontal Pod Autoscaler (HPA) for Frequency Developer Gateway to ensure services scale dynamically based on resource usage.
Table of Contents
Introduction
Kubernetes Horizontal Pod Autoscaler (HPA) helps scale your deployment based on real-time resource usage (such as CPU and memory). By configuring HPA for the Frequency Developer Gateway, you ensure your services remain available and responsive under varying loads: scaling out when demand increases and scaling in when resources aren't needed.
Prerequisites
Before implementing autoscaling, ensure that:
- Kubernetes metrics server (or another resource metrics provider) is enabled and running.
- Helm is installed for managing Kubernetes applications.
- The deployment for Frequency Developer Gateway is running in your Kubernetes cluster.
Configuring Horizontal Pod Autoscaler
Default Autoscaling Settings
In `values.yaml`, autoscaling is controlled with the following parameters:
```yaml
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 70
```
- enabled: Enable or disable autoscaling.
- minReplicas: Minimum number of pod replicas.
- maxReplicas: Maximum number of pod replicas.
- targetCPUUtilizationPercentage: Average CPU utilization target for triggering scaling.
- targetMemoryUtilizationPercentage: Average memory utilization target for triggering scaling.
Metrics for Autoscaling
The Kubernetes HPA uses real-time resource consumption to determine whether to increase or decrease the number of pods. Metrics commonly used include:
- CPU utilization: Scaling based on CPU usage.
- Memory utilization: Scaling based on memory consumption.
You can configure one or both, depending on your resource needs.
Sample Configuration
Here is an example `values.yaml` configuration for enabling autoscaling with CPU and memory targets:
```yaml
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 75
```
This setup will ensure the following:
- The number of pod replicas will never go below 2 or above 10.
- Kubernetes will attempt to keep CPU usage around 70% across all pods.
- Kubernetes will attempt to keep memory usage around 75% across all pods.
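A configuration like this typically renders to a standard `autoscaling/v2` HorizontalPodAutoscaler resource, roughly like the following sketch (the resource and deployment names are assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frequency-gateway
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frequency-gateway  # assumed deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    # Scale to keep average CPU utilization near 70% of requests
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    # Scale to keep average memory utilization near 75% of requests
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 75
```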
Resource Limits
Setting resource limits ensures your pods are scheduled appropriately and have the necessary resources to function efficiently. Define requests and limits in `values.yaml` like this:
```yaml
resources:
  requests:
    memory: "256Mi"
    cpu: "100m"
  limits:
    memory: "512Mi"
    cpu: "500m"
```
- requests: The minimum CPU and memory a pod needs.
- limits: The maximum CPU and memory a pod can use.
Setting these values ensures that the HPA scales the pods without overloading the system.
Verifying and Monitoring Autoscaling
Once you've enabled autoscaling, you can monitor it using `kubectl`:

```sh
kubectl get hpa
```
This will output the current state of the HPA, including current replicas, target utilization, and actual resource usage.
To see the pods scaling in real time:

```sh
kubectl get pods -w
```

You can also inspect specific metrics with:

```sh
kubectl top pods
```
Troubleshooting
HPA is Not Scaling Pods
If the HPA doesn't seem to be scaling as expected, check the following:
- Metrics Server: Ensure the metrics server is running properly by checking:

  ```sh
  kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
  ```

  If this command fails, the metrics server might not be installed or working correctly.

- HPA Status: Describe the HPA resource to inspect events and scaling behavior:

  ```sh
  kubectl describe hpa frequency-gateway
  ```

- Resource Requests: Ensure that `resources.requests` are defined in your deployment configuration. HPA relies on these values to calculate utilization and make scaling decisions.
Scaling Too Slowly or Too Aggressively
If your services are scaling too slowly or too aggressively, consider adjusting the `targetCPUUtilizationPercentage` or `targetMemoryUtilizationPercentage` values.
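If adjusting the utilization targets isn't enough, the `autoscaling/v2` API also exposes a `behavior` section for tuning scaling speed directly. A sketch with illustrative values:

```yaml
spec:
  behavior:
    scaleUp:
      # React quickly to load spikes: add up to 4 pods per minute
      stabilizationWindowSeconds: 0
      policies:
        - type: Pods
          value: 4
          periodSeconds: 60
    scaleDown:
      # Scale in cautiously: wait 5 minutes before removing pods,
      # then remove at most 50% of the current replicas per minute
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60
```

A longer scale-down stabilization window is a common way to avoid replica "flapping" under bursty traffic.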
By following this guide, you will have a solid understanding of how to configure Kubernetes autoscaling for your Frequency Developer Gateway services, ensuring they adapt dynamically to workload demands.
Monitoring Frequency-Gateway with AWS CloudWatch
This guide explains how to set up monitoring for the Frequency-Gateway application using AWS CloudWatch for logging and metrics collection. CloudWatch offers in-depth metrics for system performance, container health, and more through Container Insights. For further details on CloudWatch setup, see the AWS CloudWatch Agent on Kubernetes documentation.
Prerequisites
- CloudWatch Agent: Install the CloudWatch Agent in your Kubernetes cluster, typically as a DaemonSet, to ensure metrics are collected from each node.
- IAM Roles: Ensure your cluster has the required permissions to write metrics and logs to AWS CloudWatch. Use IAM roles for service accounts or AWS IAM roles to attach permissions.
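With IAM Roles for Service Accounts (IRSA) on EKS, the permission grant is typically expressed as an annotation on the agent's service account. A sketch (the role ARN is a placeholder, and the role should carry CloudWatch write permissions such as those in the managed `CloudWatchAgentServerPolicy`):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloudwatch-agent
  namespace: amazon-cloudwatch
  annotations:
    # Placeholder ARN; EKS injects credentials for this role into the agent pods
    eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/cloudwatch-agent-role"
```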
Step 1: Define CloudWatch Configuration in values.yaml
Customize your Helm chart's `values.yaml` file to enable CloudWatch and specify the required parameters. Example:
```yaml
cloudwatch:
  enabled: true
  region: "us-east-1"
  cluster_name: "MyCluster"
  collection_interval: 60
  enhanced_container_insights: true
  flush_interval: 5
```
Step 2: Create the CloudWatch ConfigMap
The ConfigMap defines the JSON configuration for the CloudWatch agent. The following Helm template dynamically generates the ConfigMap based on values in `values.yaml`:
```yaml
{{- if .Values.cloudwatch.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: cwagent-config
  namespace: amazon-cloudwatch
data:
  cwagentconfig.json: |
    {
      "agent": {
        "region": "{{ .Values.cloudwatch.region }}"
      },
      "logs": {
        "metrics_collected": {
          "kubernetes": {
            "cluster_name": "{{ .Values.cloudwatch.cluster_name }}",
            "metrics_collection_interval": {{ .Values.cloudwatch.collection_interval }},
            "enhanced_container_insights": {{ .Values.cloudwatch.enhanced_container_insights }}
          }
        },
        "force_flush_interval": {{ .Values.cloudwatch.flush_interval }}
      }
    }
{{- end }}
```
This configuration focuses on Kubernetes cluster-level insights, collecting metrics on an interval and enabling enhanced container insights.
Step 3: Deploy the ConfigMap
- Apply the ConfigMap to the `amazon-cloudwatch` namespace in your Kubernetes cluster.
- Restart the CloudWatch agent pods to load the updated configuration.
Step 4: View Logs and Metrics in CloudWatch
After deploying the CloudWatch agent with the ConfigMap, access CloudWatch to view real-time logs and metrics under your designated log group and cluster name. For more specific metrics and alerts, refer to AWS documentation for configuring CloudWatch Alarms and Dashboards.
For further configuration details, refer to the AWS documentation for the CloudWatch Agent on Kubernetes.
FAQ
Coming soon...
Support Channels
Need help with Frequency Developer Gateway or working through the Frequency ecosystem? There are many different options to meet your needs.
GitHub Issues
Checking existing GitHub issues, or creating a new one, is an easy place to start if you want to ask a question or report a bug or documentation issue.
Frequency Discord
Frequency has a Discord where you can discuss Gateway and connect with the Frequency community.
Direct Partnership
Gateway is built by Project Liberty, a contributor to the Frequency ecosystem. If you want more help or just to connect, contact hello@projectliberty.io.
Community Resources
Frequency Developer Gateway
All Frequency Developer Gateway code is open source, and you are welcome to participate in its development.
Many of the Gateway related tools are also open source:
Frequency
Frequency is built using the Polkadot SDK. Most Polkadot SDK (also called Substrate) tooling works with Frequency.
- Frequency Homepage
- Frequency Documentation
- Frequency GitHub
- Polkadot SDK Documentation
- Polkadot.js Toolkit
- Frequency/Polkadot Wallets
IPFS
IPFS (the InterPlanetary File System) is a peer-to-peer network and protocol designed to make the web faster, safer, and more open. It uses "content-based addressing," which allows content to move between servers and act as a distributed CDN on demand. While most users rely on providers to keep their files available, users can move between services, and providers can help them maintain the availability of their content.
DSNP
Frequency Developer Gateway is built using DSNP (Decentralized Social Media Protocol).
Project Liberty
Project Liberty is a contributor to the Frequency ecosystem and the maintainer of the open source Gateway tool.