Building a File Upload System for Custom CRMs Using AWS S3

The Problem

Custom CRM systems need secure, scalable file storage for contracts, invoices, product images, and customer documents. Storing files directly on your application server or in your database creates problems: server disk space fills quickly, database bloat slows queries, backups become massive, and serving large files consumes application bandwidth. When a sales rep uploads a 50MB contract PDF, your Node.js server must buffer the whole file, tying up memory and bandwidth and degrading responsiveness for every other request. Worse, if a server crashes or scales down, locally stored files vanish. You need offloaded storage with direct-to-S3 uploads, presigned URLs for secure downloads, file metadata tracking in your database, automatic virus scanning, and proper access controls that prevent customers from accessing each other's documents. However, AWS S3 requires IAM policy configuration, bucket CORS setup, presigned URL generation with expiration times, multipart upload handling for large files, and integration of upload status with your CRM's contact and deal records. This tutorial provides production-ready code for a complete S3 file upload system with a React frontend, an Express backend, and proper security.

Tech Stack & Prerequisites

  • Node.js v18+ with npm
  • Express.js 4.18+ for backend API
  • AWS Account with S3 access
  • AWS SDK for JavaScript v3 (@aws-sdk/client-s3 3.450+)
  • PostgreSQL 14+ or MongoDB for file metadata
  • React 18+ for frontend upload interface
  • dotenv for environment variables
  • multer 1.4+ for handling multipart uploads (alternative approach)
  • uuid 9.0+ for generating unique file identifiers
  • pg 8.11+ for PostgreSQL connection

Required AWS Setup:

  • IAM user with S3 permissions created
  • Access Key ID and Secret Access Key
  • S3 bucket created
  • Bucket CORS policy configured
  • Bucket policy allowing IAM user access

Required Database Setup:

  • PostgreSQL database with file_uploads table
  • Contact/Lead tables for file associations

Step-by-Step Implementation

Step 1: Setup

Initialize the project:

bash
mkdir crm-s3-upload
cd crm-s3-upload

# Backend setup
mkdir backend
cd backend
npm init -y
npm install express @aws-sdk/client-s3 @aws-sdk/s3-request-presigner pg dotenv cors uuid multer
npm install --save-dev nodemon

# Frontend setup
cd ..
npm create vite@latest frontend -- --template react
cd frontend
npm install
npm install axios
cd ..
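
The upload component in Step 3 reads the backend URL from VITE_API_URL, falling back to http://localhost:5000. To make the value explicit, create frontend/.env:

env
VITE_API_URL=http://localhost:5000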

Create backend structure:

bash
cd backend
mkdir src routes config db
touch src/server.js routes/upload.js config/aws.js db/database.js db/schema.sql .env .gitignore

Your structure should be:
crm-s3-upload/
├── backend/
│   ├── src/
│   │   └── server.js
│   ├── routes/
│   │   └── upload.js
│   ├── config/
│   │   └── aws.js
│   ├── db/
│   │   ├── database.js
│   │   └── schema.sql
│   ├── .env
│   ├── .gitignore
│   └── package.json
└── frontend/
    ├── src/
    │   ├── App.jsx
    │   └── components/
    │       └── FileUpload.jsx
    └── package.json

backend/db/schema.sql — Database schema:

sql
-- File uploads table
CREATE TABLE IF NOT EXISTS file_uploads (
    id SERIAL PRIMARY KEY,
    file_uuid UUID UNIQUE NOT NULL,
    original_filename VARCHAR(500) NOT NULL,
    s3_key VARCHAR(1000) NOT NULL,
    s3_bucket VARCHAR(255) NOT NULL,
    file_size BIGINT NOT NULL,
    mime_type VARCHAR(100),
    uploaded_by INTEGER, -- User ID from your CRM
    associated_contact_id INTEGER, -- Contact ID
    associated_deal_id INTEGER, -- Deal ID
    file_category VARCHAR(100), -- 'contract', 'invoice', 'image', etc.
    upload_status VARCHAR(50) DEFAULT 'pending', -- 'pending', 'completed', 'failed'
    s3_url TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    deleted_at TIMESTAMP
);

-- Indexes for performance
CREATE INDEX IF NOT EXISTS idx_file_uuid ON file_uploads(file_uuid);
CREATE INDEX IF NOT EXISTS idx_contact_id ON file_uploads(associated_contact_id);
CREATE INDEX IF NOT EXISTS idx_deal_id ON file_uploads(associated_deal_id);
CREATE INDEX IF NOT EXISTS idx_uploaded_by ON file_uploads(uploaded_by);
CREATE INDEX IF NOT EXISTS idx_upload_status ON file_uploads(upload_status);

-- File access logs (for audit trail)
CREATE TABLE IF NOT EXISTS file_access_logs (
    id SERIAL PRIMARY KEY,
    file_uuid UUID REFERENCES file_uploads(file_uuid),
    accessed_by INTEGER NOT NULL,
    access_type VARCHAR(50), -- 'download', 'view', 'delete'
    ip_address INET,
    accessed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Run schema:

bash
psql -U your_username -d your_database -f db/schema.sql

backend/package.json — Add scripts:

json
{
  "name": "crm-s3-backend",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "start": "node src/server.js",
    "dev": "nodemon src/server.js"
  },
  "dependencies": {
    "@aws-sdk/client-s3": "^3.450.0",
    "@aws-sdk/s3-request-presigner": "^3.450.0",
    "cors": "^2.8.5",
    "dotenv": "^16.3.1",
    "express": "^4.18.2",
    "multer": "^1.4.5-lts.1",
    "pg": "^8.11.3",
    "uuid": "^9.0.1"
  },
  "devDependencies": {
    "nodemon": "^3.0.1"
  }
}

backend/.gitignore:

bash
echo "node_modules/
.env
*.log
uploads/" > .gitignore

Step 2: Configuration

backend/.env — Store AWS credentials securely:

env
# AWS S3 Configuration
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
S3_BUCKET_NAME=your-crm-uploads-bucket
S3_FILE_PREFIX=crm-files/

# PostgreSQL Configuration
PG_HOST=localhost
PG_PORT=5432
PG_DATABASE=crm_database
PG_USER=your_username
PG_PASSWORD=your_password

# Server Configuration
PORT=5000
NODE_ENV=development
FRONTEND_URL=http://localhost:5173

# File Upload Limits
MAX_FILE_SIZE_MB=100
ALLOWED_MIME_TYPES=application/pdf,image/jpeg,image/png,application/msword,application/vnd.openxmlformats-officedocument.wordprocessingml.document

# Presigned URL expiration (seconds)
PRESIGNED_URL_EXPIRY=3600
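
A typo in any of these variables usually surfaces later as an opaque AWS or PostgreSQL error. As a minimal sketch, assuming a hypothetical config/validate-env.js helper (not part of the structure above), you can fail fast at startup:

javascript
// config/validate-env.js (hypothetical helper): call before creating the S3 client
const REQUIRED_VARS = [
  'AWS_REGION',
  'AWS_ACCESS_KEY_ID',
  'AWS_SECRET_ACCESS_KEY',
  'S3_BUCKET_NAME',
  'PG_HOST',
  'PG_DATABASE',
];

export function validateEnv() {
  const missing = REQUIRED_VARS.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
}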

How to create AWS S3 bucket and IAM user:

  1. Create S3 Bucket:
    • Go to AWS Console → S3
    • Click Create bucket
    • Bucket name: your-crm-uploads-bucket (must be globally unique)
    • Region: us-east-1
    • Block all public access: ENABLED
    • Versioning: Enabled (recommended)
    • Click Create bucket
  2. Configure CORS policy:
    • Select bucket → Permissions → CORS
    • Add this configuration:
json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedOrigins": ["http://localhost:5173", "https://yourdomain.com"],
    "ExposeHeaders": ["ETag"]
  }
]
  3. Create IAM User:
    • Go to IAM → Users → Create user
    • User name: crm-s3-uploader
    • Attach policy: AmazonS3FullAccess (or custom restrictive policy)
    • Create user and generate Access Key
    • Copy Access Key ID and Secret Access Key to .env

backend/config/aws.js — AWS S3 client configuration:

javascript
import { S3Client } from '@aws-sdk/client-s3';
import dotenv from 'dotenv';

dotenv.config();

// Create S3 client
export const s3Client = new S3Client({
  region: process.env.AWS_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});

export const S3_BUCKET_NAME = process.env.S3_BUCKET_NAME;
export const S3_FILE_PREFIX = process.env.S3_FILE_PREFIX || 'crm-files/';

// File upload configuration
export const uploadConfig = {
  maxFileSize: (process.env.MAX_FILE_SIZE_MB || 100) * 1024 * 1024, // Convert to bytes
  allowedMimeTypes: (process.env.ALLOWED_MIME_TYPES || '').split(',').filter(Boolean),
  presignedUrlExpiry: parseInt(process.env.PRESIGNED_URL_EXPIRY) || 3600,
};

export default s3Client;

backend/db/database.js — Database operations:

javascript
import pg from 'pg';
import dotenv from 'dotenv';

dotenv.config();

const { Pool } = pg;

const pool = new Pool({
  host: process.env.PG_HOST,
  port: process.env.PG_PORT,
  database: process.env.PG_DATABASE,
  user: process.env.PG_USER,
  password: process.env.PG_PASSWORD,
  max: 20,
});

// Test connection
export async function testConnection() {
  try {
    const client = await pool.connect();
    console.log('✓ PostgreSQL connected');
    client.release();
    return true;
  } catch (error) {
    console.error('✗ PostgreSQL connection failed:', error.message);
    return false;
  }
}

// Create file upload record
export async function createFileRecord(fileData) {
  const query = `
    INSERT INTO file_uploads (
      file_uuid, original_filename, s3_key, s3_bucket, file_size,
      mime_type, uploaded_by, associated_contact_id, associated_deal_id,
      file_category, upload_status, s3_url
    ) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)
    RETURNING *
  `;

  const values = [
    fileData.fileUuid,
    fileData.originalFilename,
    fileData.s3Key,
    fileData.s3Bucket,
    fileData.fileSize,
    fileData.mimeType,
    fileData.uploadedBy,
    fileData.contactId || null,
    fileData.dealId || null,
    fileData.category || 'general',
    fileData.uploadStatus || 'pending',
    fileData.s3Url || null,
  ];

  const result = await pool.query(query, values);
  return result.rows[0];
}

// Update file upload status
export async function updateFileStatus(fileUuid, status, s3Url = null) {
  const query = `
    UPDATE file_uploads
    SET upload_status = $1, s3_url = $2, updated_at = CURRENT_TIMESTAMP
    WHERE file_uuid = $3
    RETURNING *
  `;

  const result = await pool.query(query, [status, s3Url, fileUuid]);
  return result.rows[0];
}

// Get file by UUID
export async function getFileByUuid(fileUuid) {
  const query = 'SELECT * FROM file_uploads WHERE file_uuid = $1 AND deleted_at IS NULL';
  const result = await pool.query(query, [fileUuid]);
  return result.rows[0];
}

// Get files by contact ID
export async function getFilesByContact(contactId) {
  const query = `
    SELECT * FROM file_uploads
    WHERE associated_contact_id = $1 AND deleted_at IS NULL
    ORDER BY created_at DESC
  `;
  const result = await pool.query(query, [contactId]);
  return result.rows;
}

// Soft delete file
export async function deleteFile(fileUuid, userId) {
  const query = `
    UPDATE file_uploads
    SET deleted_at = CURRENT_TIMESTAMP
    WHERE file_uuid = $1
    RETURNING *
  `;

  const result = await pool.query(query, [fileUuid]);

  // Log access
  await logFileAccess(fileUuid, userId, 'delete');

  return result.rows[0];
}

// Log file access
export async function logFileAccess(fileUuid, userId, accessType, ipAddress = null) {
  const query = `
    INSERT INTO file_access_logs (file_uuid, accessed_by, access_type, ip_address)
    VALUES ($1, $2, $3, $4)
  `;

  await pool.query(query, [fileUuid, userId, accessType, ipAddress]);
}

export default pool;
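
One note on deleteFile above: the soft delete and the audit-log insert run as two separate statements, so a crash between them can leave a deletion with no log entry. If that matters for your audit trail, here is a sketch of a transactional variant using pg's client API:

javascript
// Sketch: soft delete + audit log in a single transaction
export async function deleteFileAtomic(fileUuid, userId) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await client.query(
      `UPDATE file_uploads SET deleted_at = CURRENT_TIMESTAMP
       WHERE file_uuid = $1 RETURNING *`,
      [fileUuid]
    );
    await client.query(
      `INSERT INTO file_access_logs (file_uuid, accessed_by, access_type)
       VALUES ($1, $2, 'delete')`,
      [fileUuid, userId]
    );
    await client.query('COMMIT');
    return result.rows[0];
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}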

Step 3: Core Logic

routes/upload.js — File upload routes with presigned URLs:

javascript
import express from 'express';
import { v4 as uuidv4 } from 'uuid';
import { PutObjectCommand, GetObjectCommand, DeleteObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import multer from 'multer';
import path from 'path';
import { s3Client, S3_BUCKET_NAME, S3_FILE_PREFIX, uploadConfig } from '../config/aws.js';
import pool, {
  createFileRecord,
  updateFileStatus,
  getFileByUuid,
  getFilesByContact,
  deleteFile,
  logFileAccess,
} from '../db/database.js';

const router = express.Router();

// Configure multer for memory storage
const upload = multer({
  storage: multer.memoryStorage(),
  limits: {
    fileSize: uploadConfig.maxFileSize,
  },
  fileFilter: (req, file, cb) => {
    if (uploadConfig.allowedMimeTypes.length === 0 || 
        uploadConfig.allowedMimeTypes.includes(file.mimetype)) {
      cb(null, true);
    } else {
      cb(new Error(`File type ${file.mimetype} not allowed`));
    }
  },
});

// POST /upload/presigned-url - Generate presigned URL for direct S3 upload
router.post('/presigned-url', async (req, res) => {
  try {
    const { filename, contentType, contactId, dealId, category, userId } = req.body;

    if (!filename || !contentType) {
      return res.status(400).json({ error: 'Filename and contentType required' });
    }

    // Validate content type
    if (uploadConfig.allowedMimeTypes.length > 0 && 
        !uploadConfig.allowedMimeTypes.includes(contentType)) {
      return res.status(400).json({ error: `Content type ${contentType} not allowed` });
    }

    // Generate unique file UUID
    const fileUuid = uuidv4();
    const fileExtension = path.extname(filename);
    const s3Key = `${S3_FILE_PREFIX}${fileUuid}${fileExtension}`;

    // Create presigned PUT URL
    const command = new PutObjectCommand({
      Bucket: S3_BUCKET_NAME,
      Key: s3Key,
      ContentType: contentType,
      Metadata: {
        'original-filename': filename,
        'uploaded-by': userId?.toString() || 'unknown',
        'contact-id': contactId?.toString() || '',
        'deal-id': dealId?.toString() || '',
      },
    });

    const presignedUrl = await getSignedUrl(s3Client, command, {
      expiresIn: uploadConfig.presignedUrlExpiry,
    });

    // Create database record
    const fileRecord = await createFileRecord({
      fileUuid,
      originalFilename: filename,
      s3Key,
      s3Bucket: S3_BUCKET_NAME,
      fileSize: 0, // Will be updated after upload
      mimeType: contentType,
      uploadedBy: userId,
      contactId,
      dealId,
      category,
      uploadStatus: 'pending',
    });

    console.log(`✓ Presigned URL generated for: ${filename}`);

    res.json({
      success: true,
      uploadUrl: presignedUrl,
      fileUuid,
      s3Key,
      expiresIn: uploadConfig.presignedUrlExpiry,
    });
  } catch (error) {
    console.error('Error generating presigned URL:', error);
    res.status(500).json({ error: error.message });
  }
});

// POST /upload/confirm - Confirm successful upload
router.post('/confirm', async (req, res) => {
  try {
    const { fileUuid, fileSize } = req.body;

    if (!fileUuid) {
      return res.status(400).json({ error: 'fileUuid required' });
    }

    const file = await getFileByUuid(fileUuid);

    if (!file) {
      return res.status(404).json({ error: 'File record not found' });
    }

    // Update file record with size and completed status
    const s3Url = `https://${S3_BUCKET_NAME}.s3.${process.env.AWS_REGION}.amazonaws.com/${file.s3_key}`;
    
    const updated = await updateFileStatus(fileUuid, 'completed', s3Url);

    // Update file size if provided
    if (fileSize) {
      await pool.query(
        'UPDATE file_uploads SET file_size = $1 WHERE file_uuid = $2',
        [fileSize, fileUuid]
      );
    }

    console.log(`✓ Upload confirmed: ${file.original_filename}`);

    res.json({
      success: true,
      file: updated,
    });
  } catch (error) {
    console.error('Error confirming upload:', error);
    res.status(500).json({ error: error.message });
  }
});

// POST /upload/direct - Direct upload via backend (alternative method)
router.post('/direct', upload.single('file'), async (req, res) => {
  try {
    if (!req.file) {
      return res.status(400).json({ error: 'No file uploaded' });
    }

    const { contactId, dealId, category, userId } = req.body;

    // Generate unique file UUID
    const fileUuid = uuidv4();
    const fileExtension = path.extname(req.file.originalname);
    const s3Key = `${S3_FILE_PREFIX}${fileUuid}${fileExtension}`;

    // Upload to S3
    const command = new PutObjectCommand({
      Bucket: S3_BUCKET_NAME,
      Key: s3Key,
      Body: req.file.buffer,
      ContentType: req.file.mimetype,
      Metadata: {
        'original-filename': req.file.originalname,
        'uploaded-by': userId?.toString() || 'unknown',
      },
    });

    await s3Client.send(command);

    const s3Url = `https://${S3_BUCKET_NAME}.s3.${process.env.AWS_REGION}.amazonaws.com/${s3Key}`;

    // Create database record
    const fileRecord = await createFileRecord({
      fileUuid,
      originalFilename: req.file.originalname,
      s3Key,
      s3Bucket: S3_BUCKET_NAME,
      fileSize: req.file.size,
      mimeType: req.file.mimetype,
      uploadedBy: userId,
      contactId,
      dealId,
      category,
      uploadStatus: 'completed',
      s3Url,
    });

    console.log(`✓ File uploaded: ${req.file.originalname}`);

    res.json({
      success: true,
      file: fileRecord,
    });
  } catch (error) {
    console.error('Error uploading file:', error);
    res.status(500).json({ error: error.message });
  }
});

// GET /upload/download/:fileUuid - Get presigned download URL
router.get('/download/:fileUuid', async (req, res) => {
  try {
    const { fileUuid } = req.params;
    const { userId } = req.query;

    const file = await getFileByUuid(fileUuid);

    if (!file) {
      return res.status(404).json({ error: 'File not found' });
    }

    // Generate presigned GET URL
    const command = new GetObjectCommand({
      Bucket: S3_BUCKET_NAME,
      Key: file.s3_key,
      ResponseContentDisposition: `attachment; filename="${file.original_filename}"`,
    });

    const downloadUrl = await getSignedUrl(s3Client, command, {
      expiresIn: 300, // 5 minutes
    });

    // Log access
    await logFileAccess(fileUuid, userId, 'download', req.ip);

    res.json({
      success: true,
      downloadUrl,
      filename: file.original_filename,
      expiresIn: 300,
    });
  } catch (error) {
    console.error('Error generating download URL:', error);
    res.status(500).json({ error: error.message });
  }
});

// GET /upload/contact/:contactId - Get files for contact
router.get('/contact/:contactId', async (req, res) => {
  try {
    const { contactId } = req.params;

    const files = await getFilesByContact(contactId);

    res.json({
      success: true,
      count: files.length,
      files,
    });
  } catch (error) {
    console.error('Error fetching contact files:', error);
    res.status(500).json({ error: error.message });
  }
});

// DELETE /upload/:fileUuid - Delete file
router.delete('/:fileUuid', async (req, res) => {
  try {
    const { fileUuid } = req.params;
    const { userId } = req.body;

    const file = await getFileByUuid(fileUuid);

    if (!file) {
      return res.status(404).json({ error: 'File not found' });
    }

    // Delete from S3
    const command = new DeleteObjectCommand({
      Bucket: S3_BUCKET_NAME,
      Key: file.s3_key,
    });

    await s3Client.send(command);

    // Soft delete from database
    await deleteFile(fileUuid, userId);

    console.log(`✓ File deleted: ${file.original_filename}`);

    res.json({
      success: true,
      message: 'File deleted successfully',
    });
  } catch (error) {
    console.error('Error deleting file:', error);
    res.status(500).json({ error: error.message });
  }
});

export default router;
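
A caveat on the /presigned-url route above: a presigned PUT URL cannot enforce an object-size limit, so uploadConfig.maxFileSize is only applied on the /direct path. If you want S3 itself to reject oversized direct-to-S3 uploads, presigned POST policies support a content-length-range condition. A sketch for routes/upload.js, assuming the separate @aws-sdk/s3-presigned-post package is installed:

javascript
import { createPresignedPost } from '@aws-sdk/s3-presigned-post';

// Sketch: presigned POST with a size limit enforced by S3
async function getPresignedPost(s3Key, contentType) {
  const { url, fields } = await createPresignedPost(s3Client, {
    Bucket: S3_BUCKET_NAME,
    Key: s3Key,
    Conditions: [
      ['content-length-range', 0, uploadConfig.maxFileSize], // S3 rejects larger bodies
      ['eq', '$Content-Type', contentType],
    ],
    Fields: { 'Content-Type': contentType },
    Expires: uploadConfig.presignedUrlExpiry,
  });

  // The browser then sends a multipart/form-data POST to `url` with all
  // `fields` appended, plus the file itself under the `file` key.
  return { url, fields };
}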

src/server.js — Express server:

javascript
import express from 'express';
import cors from 'cors';
import dotenv from 'dotenv';
import { testConnection } from '../db/database.js';
import uploadRoutes from '../routes/upload.js';

dotenv.config();

const app = express();
const PORT = process.env.PORT || 5000;

// Middleware
app.use(cors({ origin: process.env.FRONTEND_URL }));
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

// Health check
app.get('/health', (req, res) => {
  res.json({ status: 'healthy' });
});

// Upload routes
app.use('/api/upload', uploadRoutes);

// Start server
async function startServer() {
  try {
    await testConnection();

    app.listen(PORT, () => {
      console.log(`\n🚀 Server running on http://localhost:${PORT}`);
      console.log(`\nAPI Endpoints:`);
      console.log(`  POST   /api/upload/presigned-url - Get presigned upload URL`);
      console.log(`  POST   /api/upload/confirm - Confirm upload`);
      console.log(`  POST   /api/upload/direct - Direct upload`);
      console.log(`  GET    /api/upload/download/:fileUuid - Download file`);
      console.log(`  GET    /api/upload/contact/:contactId - Get contact files`);
      console.log(`  DELETE /api/upload/:fileUuid - Delete file\n`);
    });
  } catch (error) {
    console.error('Failed to start:', error);
    process.exit(1);
  }
}

startServer();
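
As written, neither file registers error-handling middleware, so a multer rejection (for example LIMIT_FILE_SIZE on the /direct route) falls through to Express's default HTML error page. A sketch of a JSON error handler to register after the routes in src/server.js:

javascript
import multer from 'multer';

// Register after app.use('/api/upload', uploadRoutes)
app.use((err, req, res, next) => {
  if (err instanceof multer.MulterError) {
    // e.g. LIMIT_FILE_SIZE when a file exceeds uploadConfig.maxFileSize
    return res.status(400).json({ error: `Upload error: ${err.code}` });
  }
  if (err) {
    return res.status(500).json({ error: err.message });
  }
  next();
});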

frontend/src/components/FileUpload.jsx — React upload component:

jsx
import { useState, useEffect } from 'react';
import axios from 'axios';

const API_URL = import.meta.env.VITE_API_URL || 'http://localhost:5000';

export default function FileUpload({ contactId, userId }) {
  const [file, setFile] = useState(null);
  const [uploading, setUploading] = useState(false);
  const [progress, setProgress] = useState(0);
  const [uploadedFiles, setUploadedFiles] = useState([]);
  const [error, setError] = useState(null);

  // Handle file selection
  function handleFileChange(e) {
    const selected = e.target.files[0];
    if (selected) {
      // Check file size (100MB limit)
      if (selected.size > 100 * 1024 * 1024) {
        setError('File size exceeds 100MB limit');
        return;
      }
      setFile(selected);
      setError(null);
    }
  }

  // Upload using presigned URL
  async function handleUpload() {
    if (!file) return;

    try {
      setUploading(true);
      setProgress(0);
      setError(null);

      // Step 1: Get presigned URL
      const presignedResponse = await axios.post(`${API_URL}/api/upload/presigned-url`, {
        filename: file.name,
        contentType: file.type,
        contactId,
        userId,
        category: 'document',
      });

      const { uploadUrl, fileUuid } = presignedResponse.data;

      // Step 2: Upload directly to S3
      await axios.put(uploadUrl, file, {
        headers: {
          'Content-Type': file.type,
        },
        onUploadProgress: (progressEvent) => {
          const percentCompleted = Math.round(
            (progressEvent.loaded * 100) / progressEvent.total
          );
          setProgress(percentCompleted);
        },
      });

      // Step 3: Confirm upload
      await axios.post(`${API_URL}/api/upload/confirm`, {
        fileUuid,
        fileSize: file.size,
      });

      console.log('✓ Upload successful');
      setFile(null);
      setProgress(0);
      
      // Refresh file list
      loadFiles();
    } catch (err) {
      console.error('Upload error:', err);
      setError(err.response?.data?.error || err.message);
    } finally {
      setUploading(false);
    }
  }

  // Load files for contact
  async function loadFiles() {
    try {
      const response = await axios.get(`${API_URL}/api/upload/contact/${contactId}`);
      setUploadedFiles(response.data.files);
    } catch (err) {
      console.error('Error loading files:', err);
    }
  }

  // Download file
  async function handleDownload(fileUuid, filename) {
    try {
      const response = await axios.get(`${API_URL}/api/upload/download/${fileUuid}`, {
        params: { userId },
      });

      // Open presigned URL in new tab
      window.open(response.data.downloadUrl, '_blank');
    } catch (err) {
      console.error('Download error:', err);
      setError('Failed to download file');
    }
  }

  // Delete file
  async function handleDelete(fileUuid) {
    if (!confirm('Delete this file?')) return;

    try {
      await axios.delete(`${API_URL}/api/upload/${fileUuid}`, {
        data: { userId },
      });

      loadFiles();
    } catch (err) {
      console.error('Delete error:', err);
      setError('Failed to delete file');
    }
  }

  // Load files on mount and when contactId changes
  useEffect(() => {
    if (contactId) loadFiles();
  }, [contactId]);

  return (
    <div className="max-w-2xl mx-auto p-6 bg-white rounded-lg shadow">
      <h2 className="text-2xl font-bold mb-6">File Upload</h2>

      {error && (
        <div className="mb-4 p-4 bg-red-50 text-red-700 rounded">
          {error}
        </div>
      )}

      {/* Upload Form */}
      <div className="mb-8">
        <input
          type="file"
          onChange={handleFileChange}
          disabled={uploading}
          className="mb-4 block w-full text-sm text-gray-500 file:mr-4 file:py-2 file:px-4 file:rounded file:border-0 file:text-sm file:font-semibold file:bg-blue-50 file:text-blue-700 hover:file:bg-blue-100"
        />

        {file && (
          <div className="mb-4">
            <p className="text-sm text-gray-600">
              Selected: {file.name} ({(file.size / 1024 / 1024).toFixed(2)} MB)
            </p>
          </div>
        )}

        {uploading && (
          <div className="mb-4">
            <div className="w-full bg-gray-200 rounded-full h-2">
              <div
                className="bg-blue-600 h-2 rounded-full transition-all"
                style={{ width: `${progress}%` }}
              />
            </div>
            <p className="text-sm text-gray-600 mt-2">{progress}% uploaded</p>
          </div>
        )}

        <button
          onClick={handleUpload}
          disabled={!file || uploading}
          className="px-6 py-2 bg-blue-600 text-white rounded hover:bg-blue-700 disabled:opacity-50 disabled:cursor-not-allowed"
        >
          {uploading ? 'Uploading...' : 'Upload File'}
        </button>
      </div>

      {/* File List */}
      <div>
        <h3 className="text-lg font-semibold mb-4">Uploaded Files ({uploadedFiles.length})</h3>

        {uploadedFiles.length === 0 ? (
          <p className="text-gray-500">No files uploaded yet</p>
        ) : (
          <div className="space-y-3">
            {uploadedFiles.map((file) => (
              <div
                key={file.file_uuid}
                className="flex items-center justify-between p-4 border rounded hover:bg-gray-50"
              >
                <div className="flex-1">
                  <p className="font-medium">{file.original_filename}</p>
                  <p className="text-sm text-gray-500">
                    {(file.file_size / 1024 / 1024).toFixed(2)} MB • 
                    Uploaded {new Date(file.created_at).toLocaleDateString()}
                  </p>
                </div>

                <div className="flex gap-2">
                  <button
                    onClick={() => handleDownload(file.file_uuid, file.original_filename)}
                    className="px-4 py-2 text-sm bg-blue-600 text-white rounded hover:bg-blue-700"
                  >
                    Download
                  </button>
                  <button
                    onClick={() => handleDelete(file.file_uuid)}
                    className="px-4 py-2 text-sm bg-red-600 text-white rounded hover:bg-red-700"
                  >
                    Delete
                  </button>
                </div>
              </div>
            ))}
          </div>
        )}
      </div>
    </div>
  );
}
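
To try the component, render it from App.jsx with a contact and user ID; the hardcoded values here are placeholders for whatever your CRM's auth and routing context provide:

jsx
// frontend/src/App.jsx
import FileUpload from './components/FileUpload';

export default function App() {
  // Placeholder IDs: in a real CRM these come from session/route state
  return <FileUpload contactId={123} userId={1} />;
}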

Step 4: Testing

Test 1: Start Backend

bash
cd backend
npm run dev

Expected output:

✓ PostgreSQL connected
🚀 Server running on http://localhost:5000

API Endpoints:
  POST   /api/upload/presigned-url - Get presigned upload URL
  POST   /api/upload/confirm - Confirm upload
  ...

Test 2: Generate Presigned URL

bash
curl -X POST http://localhost:5000/api/upload/presigned-url \
  -H "Content-Type: application/json" \
  -d '{
    "filename": "test-document.pdf",
    "contentType": "application/pdf",
    "contactId": 123,
    "userId": 1,
    "category": "contract"
  }'

Expected response:

json
{
  "success": true,
  "uploadUrl": "https://your-bucket.s3.amazonaws.com/crm-files/uuid.pdf?X-Amz-Algorithm=...",
  "fileUuid": "550e8400-e29b-41d4-a716-446655440000",
  "s3Key": "crm-files/550e8400-e29b-41d4-a716-446655440000.pdf",
  "expiresIn": 3600
}

Test 3: Upload File to S3 Using Presigned URL

bash
# Use the uploadUrl from previous response
curl -X PUT "PRESIGNED_URL_HERE" \
  -H "Content-Type: application/pdf" \
  --upload-file /path/to/test.pdf

Test 4: Confirm Upload

bash
curl -X POST http://localhost:5000/api/upload/confirm \
  -H "Content-Type: application/json" \
  -d '{
    "fileUuid": "550e8400-e29b-41d4-a716-446655440000",
    "fileSize": 524288
  }'
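
Note that /upload/confirm trusts the client-reported fileSize. If you'd rather record the authoritative size, a sketch of a helper (for routes/upload.js) that reads it from S3 with HeadObjectCommand:

javascript
import { HeadObjectCommand } from '@aws-sdk/client-s3';

// Sketch: fetch the stored object's size instead of trusting the client
async function getActualFileSize(s3Key) {
  const head = await s3Client.send(new HeadObjectCommand({
    Bucket: S3_BUCKET_NAME,
    Key: s3Key,
  }));
  return head.ContentLength; // size in bytes as stored in S3
}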

Test 5: Verify in S3 Console

  1. Log into AWS Console → S3
  2. Navigate to your bucket
  3. Check crm-files/ folder
  4. Verify file appears with correct name
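
You can also verify programmatically; a sketch of a small ESM script (saved in the backend root so it can reuse config/aws.js):

javascript
import { ListObjectsV2Command } from '@aws-sdk/client-s3';
import { s3Client, S3_BUCKET_NAME, S3_FILE_PREFIX } from './config/aws.js';

// List the most recent objects under the CRM prefix
const { Contents = [] } = await s3Client.send(new ListObjectsV2Command({
  Bucket: S3_BUCKET_NAME,
  Prefix: S3_FILE_PREFIX,
  MaxKeys: 10,
}));

Contents.forEach((obj) => console.log(obj.Key, obj.Size, obj.LastModified));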

Test 6: Get Download URL

bash
curl "http://localhost:5000/api/upload/download/550e8400-e29b-41d4-a716-446655440000?userId=1"

Test 7: Check Database

bash
psql -U your_username -d crm_database

sql
SELECT file_uuid, original_filename, upload_status, file_size
FROM file_uploads
ORDER BY created_at DESC
LIMIT 5;

Test 8: Test Direct Upload Method

bash
curl -X POST http://localhost:5000/api/upload/direct \
  -F "file=@/path/to/test.pdf" \
  -F "contactId=123" \
  -F "userId=1" \
  -F "category=invoice"

Testing Checklist:

  • ✓ Backend server starts
  • ✓ Database connection succeeds
  • ✓ Presigned URL generates
  • ✓ File uploads to S3
  • ✓ Upload confirms in database
  • ✓ File visible in S3 console
  • ✓ Download URL generates
  • ✓ File downloads successfully
  • ✓ Files list by contact
  • ✓ File deletion works

Common Errors & Troubleshooting

Error 1: “SignatureDoesNotMatch” or “Access Denied” When Uploading

Problem: S3 upload fails with signature or access errors.

Solution: Multiple causes related to AWS configuration.

Cause 1 – Incorrect AWS credentials: Verify .env credentials match IAM user:

bash
# Test credentials with AWS CLI
aws s3 ls s3://your-crm-uploads-bucket --profile default

# If this fails, credentials are wrong

Cause 2 – Missing IAM permissions: IAM user needs S3 permissions. Create custom policy:

json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-crm-uploads-bucket",
        "arn:aws:s3:::your-crm-uploads-bucket/*"
      ]
    }
  ]
}

Attach to IAM user in AWS Console → IAM → Users → Permissions.

Cause 3 – CORS not configured: S3 bucket CORS must allow PUT requests. Verify in S3 Console → Bucket → Permissions → CORS:

json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedOrigins": ["http://localhost:5173"],
    "ExposeHeaders": ["ETag"]
  }
]

Cause 4 – Wrong bucket region: Ensure .env region matches bucket:

env
AWS_REGION=us-east-1  # Must match bucket region

Cause 5 – Presigned URL expired: The default expiry is 1 hour; if you wait longer than that between generating and using the URL, the signature is rejected. Regenerate with a longer expiry:

javascript
// Increase expiry for testing
const presignedUrl = await getSignedUrl(s3Client, command, {
  expiresIn: 7200, // 2 hours
});

Error 2: “File size exceeds limit” or Upload Hangs

Problem: Large file uploads fail or never complete.

Solution: Use multipart uploads for large files. A single PUT is capped at 5GB, and AWS recommends multipart for files over roughly 100MB; every part except the last must be at least 5MB.

For presigned uploads, use multipart:

javascript
import { CreateMultipartUploadCommand, UploadPartCommand, CompleteMultipartUploadCommand } from '@aws-sdk/client-s3';

// Backend: Generate multipart upload
router.post('/multipart/initiate', async (req, res) => {
  try {
    const { filename, contentType, contactId, userId } = req.body;

    const fileUuid = uuidv4();
    const s3Key = `${S3_FILE_PREFIX}${fileUuid}${path.extname(filename)}`;

    const command = new CreateMultipartUploadCommand({
      Bucket: S3_BUCKET_NAME,
      Key: s3Key,
      ContentType: contentType,
    });

    const response = await s3Client.send(command);

    // Save to database
    await createFileRecord({
      fileUuid,
      originalFilename: filename,
      s3Key,
      s3Bucket: S3_BUCKET_NAME,
      fileSize: 0,
      mimeType: contentType,
      uploadedBy: userId,
      contactId,
      uploadStatus: 'pending',
    });

    res.json({
      success: true,
      uploadId: response.UploadId,
      fileUuid,
      s3Key,
    });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

// Generate presigned URLs for each part
router.post('/multipart/part-url', async (req, res) => {
  const { s3Key, uploadId, partNumber } = req.body;

  const command = new UploadPartCommand({
    Bucket: S3_BUCKET_NAME,
    Key: s3Key,
    UploadId: uploadId,
    PartNumber: partNumber,
  });

  const presignedUrl = await getSignedUrl(s3Client, command, {
    expiresIn: 3600,
  });

  res.json({ presignedUrl });
});
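
The frontend below also calls /multipart/complete, which the snippet above does not define. A minimal sketch using the CompleteMultipartUploadCommand already imported:

javascript
// Sketch: finalize the multipart upload
router.post('/multipart/complete', async (req, res) => {
  try {
    const { s3Key, uploadId, parts } = req.body;

    const command = new CompleteMultipartUploadCommand({
      Bucket: S3_BUCKET_NAME,
      Key: s3Key,
      UploadId: uploadId,
      MultipartUpload: {
        Parts: parts, // [{ ETag, PartNumber }], ascending PartNumber order
      },
    });

    const response = await s3Client.send(command);
    // You could also mark the file_uploads row 'completed' here
    res.json({ success: true, location: response.Location });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});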

Frontend chunking:

javascript
const CHUNK_SIZE = 5 * 1024 * 1024; // 5MB chunks

async function uploadLargeFile(file) {
  // 1. Initiate multipart upload
  const initResponse = await axios.post(`${API_URL}/api/upload/multipart/initiate`, {
    filename: file.name,
    contentType: file.type,
  });

  const { uploadId, s3Key } = initResponse.data;
  const parts = [];

  // 2. Upload parts
  const totalParts = Math.ceil(file.size / CHUNK_SIZE);

  for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
    const start = (partNumber - 1) * CHUNK_SIZE;
    const end = Math.min(start + CHUNK_SIZE, file.size);
    const chunk = file.slice(start, end);

    // Get presigned URL for this part
    const urlResponse = await axios.post(`${API_URL}/api/upload/multipart/part-url`, {
      s3Key,
      uploadId,
      partNumber,
    });

    // Upload chunk
    const uploadResponse = await axios.put(urlResponse.data.presignedUrl, chunk);

    parts.push({
      ETag: uploadResponse.headers.etag,
      PartNumber: partNumber,
    });
  }

  // 3. Complete multipart upload
  await axios.post(`${API_URL}/api/upload/multipart/complete`, {
    s3Key,
    uploadId,
    parts,
  });
}
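
Failed multipart uploads leave orphaned parts in S3 that still incur storage charges. A sketch of an explicit abort (call it when any part fails); as a safety net, you can also add a bucket lifecycle rule that aborts incomplete multipart uploads after a few days:

javascript
import { AbortMultipartUploadCommand } from '@aws-sdk/client-s3';

// Sketch: discard all uploaded parts for a failed upload
async function abortUpload(s3Key, uploadId) {
  await s3Client.send(new AbortMultipartUploadCommand({
    Bucket: S3_BUCKET_NAME,
    Key: s3Key,
    UploadId: uploadId,
  }));
}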

Note that the Express body parser limits below apply only to JSON and form bodies (such as the /confirm payload), not to presigned-URL uploads (which go straight to S3) or to multer (which enforces its own fileSize limit). Raise them only if you send large JSON bodies:

javascript
app.use(express.json({ limit: '100mb' }));
app.use(express.urlencoded({ limit: '100mb', extended: true }));

Error 3: “CORS Error” in Browser Console

Problem: Frontend upload fails with CORS policy error.

Solution: S3 bucket CORS must allow frontend origin.

Check current CORS: AWS Console → S3 → Bucket → Permissions → CORS configuration

Fix CORS policy:

json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
    "AllowedOrigins": [
      "http://localhost:5173",
      "http://localhost:3000",
      "https://yourdomain.com"
    ],
    "ExposeHeaders": ["ETag", "x-amz-request-id"],
    "MaxAgeSeconds": 3000
  }
]

For development, temporarily allow all origins (NOT for production):

json
{
  "AllowedOrigins": ["*"]
}

Verify CORS headers in the response: use browser DevTools → Network → click the failed request → check Response Headers for:

  • Access-Control-Allow-Origin
  • Access-Control-Allow-Methods

If these are missing, CORS is not configured correctly in S3.

Security Checklist

Critical security practices for S3 file uploads:

  • Block all public S3 access — Never make the bucket publicly readable. Use presigned URLs instead:

  AWS Console → S3 → Bucket → Permissions → Block public access → Block ALL

  • Use presigned URLs with short expiration — Presigned URLs should expire quickly:
javascript
  const presignedUrl = await getSignedUrl(s3Client, command, {
    expiresIn: 300, // 5 minutes for downloads, 1 hour for uploads
  });
  • Validate file types and sizes — Never trust client-side validation alone:
javascript
  const ALLOWED_TYPES = ['application/pdf', 'image/jpeg', 'image/png'];
  const MAX_SIZE = 100 * 1024 * 1024; // 100MB

  if (!ALLOWED_TYPES.includes(file.mimetype)) {
    throw new Error('File type not allowed');
  }

  if (file.size > MAX_SIZE) {
    throw new Error('File too large');
  }
  • Sanitize filenames — Prevent path traversal attacks:
javascript
  function sanitizeFilename(filename) {
    return filename
      .replace(/[^a-zA-Z0-9.-]/g, '_')
      .replace(/\.{2,}/g, '.')
      .slice(0, 255);
  }
  • Implement virus scanning — Use AWS Lambda with ClamAV:
javascript
  // Requires @aws-sdk/client-lambda; alternatively, trigger the Lambda
  // from an S3 event notification on the bucket.
  import { LambdaClient, InvokeCommand } from '@aws-sdk/client-lambda';

  // Lambda scans the file and quarantines it if infected
  const lambdaClient = new LambdaClient({ region: process.env.AWS_REGION });
  const scanCommand = new InvokeCommand({
    FunctionName: 'virus-scanner',
    Payload: JSON.stringify({ s3Key, bucket }),
  });
  await lambdaClient.send(scanCommand);
  • Use IAM roles instead of keys in production — For EC2/Lambda, use instance roles:
javascript
  // No credentials needed - uses EC2 instance role
  const s3Client = new S3Client({ region: 'us-east-1' });
  • Encrypt files at rest — Enable S3 server-side encryption:
javascript
  const command = new PutObjectCommand({
    Bucket: S3_BUCKET_NAME,
    Key: s3Key,
    Body: fileBuffer,
    ServerSideEncryption: 'AES256', // or 'aws:kms' for KMS
  });
  • Implement access control — Verify user has permission to access file:
javascript
  async function verifyFileAccess(fileUuid, userId) {
    const file = await getFileByUuid(fileUuid);
    if (!file) throw new Error('File not found');

    // Check ownership; userHasAdminRole is your CRM's own role-check helper
    if (file.uploaded_by !== userId && !userHasAdminRole(userId)) {
      throw new Error('Access denied');
    }

    return file;
  }

  • Enable S3 versioning — Protect against accidental deletion:

  AWS Console → S3 → Bucket → Properties → Versioning → Enable
  • Log all file access — Maintain audit trail (already implemented in code):
javascript
  await logFileAccess(fileUuid, userId, 'download', req.ip);
  • Use S3 bucket policies — Restrict access by IP or VPC:
json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::bucket/*",
        "Condition": {
          "NotIpAddress": {
            "aws:SourceIp": ["203.0.113.0/24"]
          }
        }
      }
    ]
  }

  • Set up S3 lifecycle policies — Auto-delete old files:

  AWS Console → S3 → Bucket → Management → Lifecycle rules
  → Delete objects after 365 days

Need Help With Your File Storage System?

Building secure, scalable file upload systems requires expertise in AWS, authentication, and database design. If you need assistance implementing S3 integration, configuring CDN delivery, or setting up enterprise-grade document management, schedule a consultation. We’ll help you build a production-ready solution that scales with your business.
