Switching from XMLHttpRequest to Fetch for S3 Image Uploads in React Native

I recently tried to upload images to S3 from my React Native app by following this guide from Heroku: https://devcenter.heroku.com/articles/s3-upload-node

In short, I use the aws-sdk on my Express.js backend to generate pre-signed requests, which the React Native app then uses to upload images directly to S3.
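For context, the backend endpoint follows the Heroku guide fairly closely. This is just a minimal sketch rather than my exact code; the route and bucket names are placeholders:

// Minimal sketch of the Express signing endpoint (route and bucket names are placeholders)
const aws = require('aws-sdk');
const express = require('express');

const app = express();
const s3 = new aws.S3();

app.get('/sign-s3', (req, res) => {
  const params = {
    Bucket: 'my-bucket',                  // placeholder bucket name
    Key: req.query['file-name'],
    Expires: 60,                          // signed URL valid for 60 seconds
    ContentType: req.query['file-type'],
    ACL: 'public-read'
  };
  // getSignedUrl produces a temporary PUT URL the app can upload to directly
  s3.getSignedUrl('putObject', params, (err, signedRequest) => {
    if (err) return res.status(500).end();
    res.json({
      signedRequest,
      url: 'https://my-bucket.s3.amazonaws.com/' + req.query['file-name']
    });
  });
});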

At first everything worked. But after I converted the XMLHttpRequest calls to fetch requests, which seem to be the preferred approach in React Native, a problem appeared: the files still upload to S3 successfully, but opening the image links shows only an empty square instead of the actual image:

[Screenshot: an empty square is displayed instead of the image]

On closer inspection, the code conversion below looks like the root cause:

Original Code:

_uploadFile(file, signedRequest, url) {
  const xhr = new XMLHttpRequest();
  xhr.open('PUT', signedRequest);
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4) {      // request has completed
      if (xhr.status === 200) {
        console.log("UPLOAD DONE");
      } else {
        alert('ERROR UPLOADING');
      }
    }
  };
  xhr.send(file);                    // the file object is sent directly
}

Updated Code:

_uploadFile(file, signedRequest, url) {
  let option = {
    method: "PUT",
    headers: {
      "Content-Type": "image/jpeg",
    },
    body: JSON.stringify(file)   // serializes the whole file object to a JSON string
  };

  fetch(signedRequest, option)
    .then(res => console.log("UPLOAD DONE"))
    .catch(err => console.log("ERROR UPLOADING: ", err));
}

The file object being uploaded looks as follows:

{
  name: "profileImage",
  type: "image/jpeg",
  uri: 'data:image/jpeg;base64,' + response.data, //just a base64 image string
  isStatic: true
}

If anyone has insight into why this is happening or has run into something similar, I would greatly appreciate your input. Thank you!

Answer №1

In your fetch version you pass a JSON string as the request body, so S3 stores that JSON text rather than the image bytes, which is why the links render as an empty square. To send the actual file, build a FormData object and pass it as the body. Alternatively, sticking with XHR may be the simpler option; that is what Facebook itself does, although the comment in question is a few years old by now.
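A rough sketch of the FormData variant, assuming a React Native environment where FormData accepts a { uri, type, name } file descriptor; whether the pre-signed URL accepts a multipart body depends on how the request was signed, so treat this as illustration rather than a guaranteed fix:

// Hedged sketch of the FormData approach; field names mirror the
// question's file object, not a specific library's API
_uploadFile(file, signedRequest, url) {
  const data = new FormData();
  // React Native's FormData accepts a { uri, type, name } file descriptor
  data.append('file', {
    uri: file.uri,
    type: file.type,
    name: file.name,
  });

  fetch(signedRequest, { method: 'PUT', body: data })
    .then(res => console.log('UPLOAD DONE'))
    .catch(err => console.log('ERROR UPLOADING: ', err));
}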

Also, whenever possible, prefer local file URIs over Base64-encoded data. Shuttling large image payloads across the JavaScript-to-native bridge is slow.
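For example, if the image comes from a picker, the file object can reference the local path directly. The response field here assumes a react-native-image-picker style result object:

// Sketch: reference the picker's local file URI instead of a Base64 payload
// (assumes a react-native-image-picker style response object)
const file = {
  name: 'profileImage',
  type: 'image/jpeg',
  uri: response.uri,   // e.g. a 'file:///...' path, no Base64 involved
};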
