How to transition from S3 to Greenfield

Introduction

Greenfield is a blockchain-based decentralized storage solution designed to strengthen the decentralization of data ownership and management, allowing users to manage their own data and assets. The platform promotes the development of decentralized applications (dApps) by offering on-chain data permission management together with Web2-like APIs, and it enhances data security and manageability by introducing Storage Providers (SPs), which provide authentication and storage services.

In terms of permission management, SPs offer a range of authentication services. Unlike AWS S3, which controls user permissions through an AWS Key and AWS Secret, SPs in Greenfield use private keys for permission control. On Greenfield, permission authentication is therefore anchored in blockchain technology, ensuring security and decentralization while extending blockchain functionality to cover both permission authentication and data storage.

In Greenfield’s design, users are free to select any SP as their Primary SP, along with additional SPs as Secondary SPs, ensuring both performance and reliability in object storage. The Primary SP stores all data segments of an object and responds directly to user read or download requests, whereas Secondary SPs store the data blocks generated by Erasure Coding (EC), improving data availability.

Compared to AWS S3, Greenfield’s distributed storage structure not only enhances data durability and recoverability but also ensures data integrity and verifiability through blockchain technology. In this way, Greenfield aims to promote a new data economy and dApp model, improve the transparency and efficiency of data management, and realize decentralized data management and verifiable proof of ownership on-chain.

We will now walk through the same set of operations using the Greenfield and AWS S3 Go SDKs.

Init SDK client

S3

const (
    AWSKey          = "mock-aws-key"
    AWSSecret       = "mock-aws-secret"
    Region          = "us-east-1"
)

cfg, err := config.LoadDefaultConfig(context.TODO(),
    config.WithRegion(Region),
    config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(AWSKey, AWSSecret, "")),
)
handleErr(err, "LoadDefaultConfig")
client := s3.NewFromConfig(cfg)

For AWS S3, initialization uses the AWS Key and AWS Secret to create an S3 client through which the user interacts with the service. The Region specifies the region where the user’s buckets are located.

Greenfield

const (
    RpcAddr         = "https://gnfd-testnet-fullnode-tendermint-us.bnbchain.org:443" // Greenfield Testnet RPC Address
    ChainId         = "greenfield_5600-1" // Greenfield Testnet Chain ID
    PrivateKey      = "mock-private-key"
)
cli, primarySP, err := NewFromConfig(ChainId, RpcAddr, PrivateKey)
handleErr(err, "NewFromConfig")

For Greenfield, the RPC Address and Chain ID select the specific Greenfield network; the example above targets the Testnet. Users interact using a private key exported from their wallet.
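NewFromConfig is a small helper whose full definition appears in the attached code at the end of this article. Besides creating the chain client, it selects the Primary SP by listing the in-service SPs and taking the first one:

// inside NewFromConfig: pick the first in-service SP as the Primary SP
spLists, err := cli.ListStorageProviders(context.Background(), true)
handleErr(err, "ListStorageProviders")
primarySP := spLists[0].GetOperatorAddress()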

Greenfield Mainnet Chain ID: greenfield_1017-1

Greenfield Mainnet RPC endpoints:

https://greenfield-chain.bnbchain.org:443

https://greenfield-chain-ap.bnbchain.org:443

https://greenfield-chain-eu.bnbchain.org:443

https://greenfield-chain-us.bnbchain.org:443
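To target the Mainnet instead of the Testnet, only these two constants change; any of the regional RPC endpoints above may be used, for example:

const (
    RpcAddr = "https://greenfield-chain.bnbchain.org:443" // Greenfield Mainnet RPC Address
    ChainId = "greenfield_1017-1"                         // Greenfield Mainnet Chain ID
)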

Create bucket

S3

const (
    BucketName      = "mock-bucket-name"
)

_, err = client.CreateBucket(context.TODO(), &s3.CreateBucketInput{
    Bucket: aws.String(BucketName),
})
handleErr(err, "CreateBucket")

In AWS S3, the CreateBucket method is called with a configuration object specifying the bucket name. The operation is straightforward, reflecting S3’s cloud storage focus, where the primary concern is the creation and management of storage containers in the cloud.
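One S3-specific detail: for any region other than us-east-1, CreateBucket must also carry a location constraint matching the client’s region. A minimal sketch, assuming the alias s3types for the github.com/aws/aws-sdk-go-v2/service/s3/types package and an illustrative eu-west-1 region:

// outside us-east-1, the bucket's location constraint must be set explicitly
_, err = client.CreateBucket(context.TODO(), &s3.CreateBucketInput{
    Bucket: aws.String(BucketName),
    CreateBucketConfiguration: &s3types.CreateBucketConfiguration{
        LocationConstraint: s3types.BucketLocationConstraint("eu-west-1"),
    },
})
handleErr(err, "CreateBucket")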

Greenfield

const (
    BucketName      = "mock-bucket-name"
)

_, err = cli.CreateBucket(context.TODO(), BucketName, primarySP, types.CreateBucketOptions{})
handleErr(err, "CreateBucket")

For Greenfield, the CreateBucket method also requires a bucket name but takes additional parameters such as primarySP, which is obtained during client initialization. The Primary SP executes the request and stores the corresponding bucket data, a slightly more involved interaction pattern that reflects Greenfield’s blockchain-based design.
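Because bucket metadata lives on-chain, it can be inspected right after creation. A minimal sketch, assuming greenfield-go-sdk’s HeadBucket method, which queries the on-chain BucketInfo:

// query the bucket's on-chain metadata (assumes the SDK's HeadBucket method)
bucketInfo, err := cli.HeadBucket(context.TODO(), BucketName)
handleErr(err, "HeadBucket")
fmt.Println("created bucket:", bucketInfo.BucketName)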

List buckets

S3

bucketsList, err := client.ListBuckets(context.TODO(), &s3.ListBucketsInput{})
handleErr(err, "ListBuckets")
for _, bucket := range bucketsList.Buckets {
    fmt.Printf("* %s\n", aws.ToString(bucket.Name))
}

Greenfield

bucketsList, err := cli.ListBuckets(context.TODO(), types.ListBucketsOptions{})
handleErr(err, "ListBuckets")
for _, bucket := range bucketsList.Buckets {
    fmt.Printf("* %s\n", bucket.BucketInfo.BucketName)
}

Because each client is bound to the user’s credentials at initialization, both systems can return the buckets owned by that user without any further parameters. This again suggests that transitioning from AWS S3 to Greenfield can be achieved with minimal effort and cost.
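Greenfield’s options struct also exposes chain-specific filters; for instance, the attached example passes ShowRemovedBucket to exclude buckets that have been deleted on-chain:

// exclude buckets that have been removed on-chain
bucketsList, err := cli.ListBuckets(context.TODO(), types.ListBucketsOptions{
    ShowRemovedBucket: false,
})
handleErr(err, "ListBuckets")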

Delete bucket

S3

const (
    BucketName      = "mock-bucket-name"
)

_, err = client.DeleteBucket(context.TODO(), &s3.DeleteBucketInput{
    Bucket: aws.String(BucketName),
})
handleErr(err, "Delete Bucket")

Greenfield

const (
    BucketName      = "mock-bucket-name"
)

_, err = cli.DeleteBucket(context.TODO(), BucketName, types.DeleteBucketOption{})
handleErr(err, "Delete Bucket")

The process of deleting a bucket in both AWS S3 and Greenfield is essentially identical, allowing users to easily delete a bucket by simply using its name.

Create object

S3

const (
    BucketName      = "mock-bucket-name"
    ObjectKey       = "test-api.js"
)

_, err = client.PutObject(context.TODO(), &s3.PutObjectInput{
    Bucket: aws.String(BucketName),
    Key:    aws.String(ObjectKey),
    Body:   file,
})
handleErr(err, "PutObject")

Greenfield

const (
    BucketName      = "mock-bucket-name"
    ObjectKey       = "test-api.js"
)

// create the object's metadata on the Greenfield blockchain
txnHash, err := cli.CreateObject(context.TODO(), BucketName, ObjectKey, file, types.CreateObjectOptions{})
handleErr(err, "CreateObject")

// CreateObject reads the file to compute checksums, so rewind it before uploading
_, err = file.Seek(0, io.SeekStart)
handleErr(err, "Seek")

// upload the payload to the Primary SP, referencing the creation transaction
err = cli.PutObject(context.TODO(), BucketName, ObjectKey, int64(fileInfo.Size()),
    file, types.PutObjectOptions{TxnHash: txnHash})
handleErr(err, "PutObject")

Creating objects differs between Greenfield and S3. In Greenfield, the object must be created before the put operation can be performed: users first create the object’s metadata on the Greenfield blockchain and only then submit the object payload to the SP, which ensures the integrity of the object.
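The snippet above assumes an open file handle (file) and its size (fileInfo). The preparation, taken from the attached example (UploadPath is a constant defined there), looks like this:

// open the local file and read its size for PutObject
file, err := os.Open(UploadPath + ObjectKey)
handleErr(err, "Open")
defer file.Close()

fileInfo, err := file.Stat()
handleErr(err, "Stat")

Note that sealing happens asynchronously after PutObject; the attached example simply waits a few seconds before listing the uploaded object.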

List objects

S3

const (
    BucketName      = "mock-bucket-name"
)

objects, err := client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
    Bucket: aws.String(BucketName),
})
handleErr(err, "ListObjectsV2")
for _, item := range objects.Contents {
    fmt.Printf("* %s\n", aws.ToString(item.Key))
}

Greenfield

const (
    BucketName      = "mock-bucket-name"
)

objects, err := cli.ListObjects(context.TODO(), BucketName, types.ListObjectsOptions{})
handleErr(err, "ListObjects")
for _, obj := range objects.Objects {
    log.Printf("* %s\n", obj.ObjectInfo.ObjectName)
}

In both AWS S3 and Greenfield, retrieving all objects within a bucket can be easily accomplished by simply using the bucket’s name. This functionality indicates a user-friendly approach to data management, allowing users to efficiently access and manage the contents of their storage without the need for intricate query parameters or complex configuration.
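When finer control is needed, Greenfield’s ListObjectsOptions carries paging and filtering fields; the attached example sets them explicitly:

// list at most 100 objects, excluding objects removed on-chain
objects, err := cli.ListObjects(context.TODO(), BucketName, types.ListObjectsOptions{
    ShowRemovedObject: false, Delimiter: "", MaxKeys: 100, SPAddress: "",
})
handleErr(err, "ListObjects")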

Delete object

S3

const (
    BucketName      = "mock-bucket-name"
    ObjectKey       = "test-api.js"
)

_, err = client.DeleteObject(context.TODO(), &s3.DeleteObjectInput{
    Bucket: aws.String(BucketName),
    Key:    aws.String(ObjectKey),
})
handleErr(err, "Delete Object")

Greenfield

const (
    BucketName      = "mock-bucket-name"
    ObjectKey       = "test-api.js"
)

_, err = cli.DeleteObject(context.TODO(), BucketName, ObjectKey, types.DeleteObjectOption{})
handleErr(err, "Delete Object")

Deleting an object in both AWS S3 and Greenfield is fundamentally similar, enabling users to effortlessly remove an object by specifying both the bucket name and the object name.

Get object

S3

const (
    BucketName      = "mock-bucket-name"
    ObjectKey       = "test-api.js"
)

resp, err := client.GetObject(context.TODO(), &s3.GetObjectInput{
    Bucket: aws.String(BucketName),
    Key:    aws.String(ObjectKey),
})
handleErr(err, "GetObject")

Greenfield

const (
    BucketName      = "mock-bucket-name"
    ObjectKey       = "test-api.js"
)

resp, _, err := cli.GetObject(context.TODO(), BucketName, ObjectKey, types.GetObjectOptions{})
handleErr(err, "GetObject")

Summary

AWS S3 and Greenfield offer streamlined data retrieval processes, with users specifying bucket and object names for efficient access. While AWS S3 relies on key-pair authentication and region specification for data locality and access efficiency, Greenfield adopts a blockchain-based approach, using private keys for authentication and RPC Addresses along with Chain IDs for network connectivity. Greenfield enhances service quality through regional RPC endpoints, allowing users to choose the most efficient connection based on their location.

The structural similarity in SDKs for operations like bucket creation is notable, with Greenfield requiring an additional step to obtain a primarySP during client initialization. This minimal difference suggests a smooth transition for S3 users to Greenfield, highlighting the ease of adaptation due to familiar SDK code structures and metadata handling. Moreover, Greenfield introduces a two-step object management process, offering greater control over object lifecycle states than S3’s more straightforward approach. Despite this, the core functionalities remain similar, ensuring that S3 users can quickly adapt to Greenfield’s environment without significant hurdles.

Overall, the transition from AWS S3 to Greenfield is facilitated by similar SDK coding practices and metadata management approaches, making it accessible for users familiar with S3 to migrate to Greenfield’s blockchain-based storage solution with minimal learning curve. This compatibility underscores the potential for seamless adaptation, leveraging existing cloud storage knowledge while navigating the nuances of blockchain technology.

Attached code

Example of Greenfield Integration

package main

import (
    "bytes"
    "context"
    "fmt"
    "io"
    "log"
    "os"
    "time"

    "github.com/bnb-chain/greenfield-go-sdk/client"
    "github.com/bnb-chain/greenfield-go-sdk/types"
)

// The config information targets the Greenfield Testnet
// You need to set PrivateKey, BucketName, the object keys, and the local paths for this example to work
const (
    RpcAddr         = "https://gnfd-testnet-fullnode-tendermint-us.bnbchain.org:443"
    ChainId         = "greenfield_5600-1"
    PrivateKey      = "mock-private-key"
    BucketName      = "mock-bucket-name"
    ObjectKey       = "api.js"
    UploadObjectKey = "test-api.js"
    DownloadPath    = "/Users/Desktop/s3test/"
    UploadPath      = "/Users/Desktop/s3test/"
)

func main() {
    cli, primarySP, err := NewFromConfig(ChainId, RpcAddr, PrivateKey)
    handleErr(err, "NewFromConfig")

    // create bucket
    _, err = cli.CreateBucket(context.TODO(), BucketName, primarySP, types.CreateBucketOptions{})
    handleErr(err, "CreateBucket")

    // list buckets
    bucketsList, err := cli.ListBuckets(context.TODO(), types.ListBucketsOptions{
        ShowRemovedBucket: false,
    })
    handleErr(err, "ListBuckets")
    for _, bucket := range bucketsList.Buckets {
        fmt.Printf("* %s\n", bucket.BucketInfo.BucketName)
    }

    // open the file to upload
    file, err := os.Open(UploadPath + UploadObjectKey)
    handleErr(err, "Open")
    defer file.Close()

    fileInfo, err := file.Stat()
    handleErr(err, "Stat")

    // create the object's metadata on-chain
    txnHash, err := cli.CreateObject(context.TODO(), BucketName, UploadObjectKey, file, types.CreateObjectOptions{})
    handleErr(err, "CreateObject")

    // CreateObject reads the file to compute checksums, so rewind it before uploading
    _, err = file.Seek(0, io.SeekStart)
    handleErr(err, "Seek")

    // put object: upload the payload to the Primary SP
    err = cli.PutObject(context.TODO(), BucketName, UploadObjectKey, int64(fileInfo.Size()),
        file, types.PutObjectOptions{TxnHash: txnHash})
    handleErr(err, "PutObject")

    // wait for the object to be uploaded and sealed
    time.Sleep(10 * time.Second)

    // list objects
    objects, err := cli.ListObjects(context.TODO(), BucketName, types.ListObjectsOptions{
        ShowRemovedObject: false, Delimiter: "", MaxKeys: 100, SPAddress: "",
    })
    handleErr(err, "ListObjects")
    for _, obj := range objects.Objects {
        log.Printf("* %s\n", obj.ObjectInfo.ObjectName)
    }

    // get object
    reader, _, err := cli.GetObject(context.TODO(), BucketName, UploadObjectKey, types.GetObjectOptions{})
    handleErr(err, "GetObject")
    defer reader.Close()

    outFile, err := os.Create(DownloadPath + ObjectKey)
    handleErr(err, "DownloadObject")
    defer outFile.Close()

    _, err = io.Copy(outFile, reader)
    handleErr(err, "DownloadObject")
}

func NewFromConfig(chainID, rpcAddress, privateKeyStr string) (client.IClient, string, error) {
    account, err := types.NewAccountFromPrivateKey("test", privateKeyStr)
    if err != nil {
        log.Fatalf("New account from private key error, %v", err)
        return nil, "", err
    }

    cli, err := client.New(chainID, rpcAddress, client.Option{DefaultAccount: account})
    if err != nil {
        log.Fatalf("unable to new greenfield client, %v", err)
        return nil, "", err
    }
    ctx := context.Background()

    // get storage providers list
    spLists, err := cli.ListStorageProviders(ctx, true)
    if err != nil {
        log.Fatalf("fail to list in service sps")
        return nil, "", err
    }
    // choose the first SP as the Primary SP
    primarySP := spLists[0].GetOperatorAddress()
    return cli, primarySP, nil
}

func handleErr(err error, funcName string) {
    if err != nil {
        log.Fatalln("fail to " + funcName + ": " + err.Error())
    }
}

Example of S3 Integration

package main

import (
    "context"
    "fmt"
    "io"
    "log"
    "os"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

const (
    AWSKey          = "mock-aws-key"
    AWSSecret       = "mock-aws-secret"
    Region          = "us-east-1"
    BucketName      = "mock-bucket-name"
    ObjectKey       = "mock-object-name"
    UploadObjectKey = "test-api.js"
    DownloadPath    = "/Users/Desktop/s3test/"
    UploadPath      = "/Users/Desktop/s3test/"
)

func main() {
    // set up aws s3 config
    cfg, err := config.LoadDefaultConfig(context.TODO(),
        config.WithRegion(Region),
        config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(AWSKey, AWSSecret, "")),
    )
    handleErr(err, "LoadDefaultConfig")
    client := s3.NewFromConfig(cfg)

    // create bucket
    _, err = client.CreateBucket(context.TODO(), &s3.CreateBucketInput{
        Bucket: aws.String(BucketName),
    })
    handleErr(err, "CreateBucket")

    // list buckets by owner
    result, err := client.ListBuckets(context.TODO(), &s3.ListBucketsInput{})
    handleErr(err, "ListBuckets")
    for _, bucket := range result.Buckets {
        fmt.Printf("* %s\n", aws.ToString(bucket.Name))
    }

    // create object
    file, err := os.Open(UploadPath + UploadObjectKey)
    handleErr(err, "PutObject")
    defer file.Close()

    _, err = client.PutObject(context.TODO(), &s3.PutObjectInput{
        Bucket: aws.String(BucketName),
        Key:    aws.String(UploadObjectKey),
        Body:   file,
    })
    handleErr(err, "PutObject")

    // wait for the object to be uploaded before listing
    time.Sleep(10 * time.Second)
    objects, err := client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
        Bucket: aws.String(BucketName),
    })
    handleErr(err, "ListObjectsV2")
    for _, item := range objects.Contents {
        fmt.Printf("* %s\n", aws.ToString(item.Key))
    }

    // download object
    resp, err := client.GetObject(context.TODO(), &s3.GetObjectInput{
        Bucket: aws.String(BucketName),
        Key:    aws.String(UploadObjectKey),
    })
    handleErr(err, "DownloadObject")

    defer resp.Body.Close()

    outFile, err := os.Create(DownloadPath + ObjectKey)
    handleErr(err, "DownloadObject")

    defer outFile.Close()

    _, err = io.Copy(outFile, resp.Body)
    handleErr(err, "DownloadObject")

    // delete the uploaded object so the bucket can be removed
    _, err = client.DeleteObject(context.TODO(), &s3.DeleteObjectInput{
        Bucket: aws.String(BucketName),
        Key:    aws.String(UploadObjectKey),
    })
    handleErr(err, "Delete Object")

    _, err = client.DeleteBucket(context.TODO(), &s3.DeleteBucketInput{
        Bucket: aws.String(BucketName),
    })
    handleErr(err, "Delete Bucket")
}

func handleErr(err error, funcName string) {
    if err != nil {
        log.Fatalln("fail to " + funcName + ": " + err.Error())
    }
}