ChatGPT Integration with React, Angular, and .NET Applications

Tapesh Mehta | Published on: Feb 17, 2026 | Est. reading time: 17 minutes

Artificial intelligence has revolutionized modern web development, and ChatGPT stands at the forefront of this transformation. Integrating ChatGPT with popular frameworks like React, Angular, and .NET enables developers to build intelligent, conversational applications that enhance user experiences. This comprehensive guide explores practical ChatGPT integration with React, Angular, and .NET applications, providing production-ready code examples and best practices.

Understanding ChatGPT API Fundamentals

Before diving into framework-specific implementations, understanding the ChatGPT API is crucial. OpenAI provides a REST API that accepts HTTP requests and returns AI-generated responses. The API requires authentication via API keys and supports various models including GPT-4, GPT-3.5-turbo, and specialized variants. Each request consumes tokens based on input and output length, making efficient prompt engineering essential for cost management.

The API accepts JSON payloads containing messages, model selection, temperature settings, and token limits. Responses arrive as JSON objects containing the generated text, token usage, and metadata. For real-time applications, the streaming endpoint allows progressive response rendering, significantly improving perceived performance in AI-first applications.
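As a concrete sketch, the request body and the fields you read back look like this. Field names follow the public Chat Completions API; `buildPayload` is an illustrative helper, not part of any SDK:

```typescript
// Shape of a Chat Completions request and the fields read back.
// buildPayload is an illustrative helper, not part of any SDK.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

function buildPayload(messages: ChatMessage[], stream = false) {
  return {
    model: 'gpt-4',       // model selection
    messages,             // full conversation history, oldest first
    temperature: 0.7,     // 0 = deterministic, higher = more varied
    max_tokens: 2000,     // cap on generated tokens
    stream,               // true => server-sent events instead of one JSON body
  };
}

// A non-streaming response carries the reply and the token accounting:
//   response.choices[0].message.content  -> generated text
//   response.usage.total_tokens          -> prompt + completion tokens billed

const payload = buildPayload([{ role: 'user', content: 'Hello!' }]);
// payload is ready to POST to https://api.openai.com/v1/chat/completions
```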

ChatGPT Integration with React Applications

React’s component-based architecture provides an ideal foundation for ChatGPT integration. Modern React development leverages hooks for state management and side effects, making API integration straightforward and maintainable.

Setting Up React ChatGPT Service

Creating a dedicated service layer separates concerns and enables reusability across components. This service handles API communication, error management, and response streaming. One caveat: a key referenced through a `REACT_APP_` environment variable is bundled into the browser build, so this direct-to-OpenAI pattern is suitable for prototypes only; in production, route requests through a backend such as the .NET proxy covered later.

// services/chatgptService.js
import axios from 'axios';

const OPENAI_API_KEY = process.env.REACT_APP_OPENAI_API_KEY;
const API_URL = 'https://api.openai.com/v1/chat/completions';

export class ChatGPTService {
  static async sendMessage(messages, options = {}) {
    try {
      const response = await axios.post(
        API_URL,
        {
          model: options.model || 'gpt-4',
          messages: messages,
          temperature: options.temperature ?? 0.7,
          max_tokens: options.maxTokens ?? 2000,
          stream: options.stream || false
        },
        {
          headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${OPENAI_API_KEY}`
          }
        }
      );
      
      return response.data.choices[0].message;
    } catch (error) {
      console.error('ChatGPT API Error:', error);
      throw new Error(error.response?.data?.error?.message || 'Failed to get response');
    }
  }

  static async streamMessage(messages, onChunk, options = {}) {
    const response = await fetch(API_URL, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${OPENAI_API_KEY}`
      },
      body: JSON.stringify({
        model: options.model || 'gpt-4',
        messages: messages,
        temperature: options.temperature ?? 0.7,
        max_tokens: options.maxTokens ?? 2000,
        stream: true
      })
    });

    if (!response.ok) {
      const err = await response.json().catch(() => null);
      throw new Error(err?.error?.message || `Request failed: ${response.status}`);
    }

    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    // Buffer partial lines: a network chunk may end mid-way through an SSE event
    let buffer = '';

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      buffer += decoder.decode(value, { stream: true });
      const lines = buffer.split('\n');
      buffer = lines.pop() || '';

      for (const line of lines) {
        if (line.startsWith('data: ')) {
          const data = line.slice(6);
          if (data === '[DONE]') return;

          try {
            const parsed = JSON.parse(data);
            const content = parsed.choices[0]?.delta?.content;
            if (content) onChunk(content);
          } catch (e) {
            console.error('Parse error:', e);
          }
        }
      }
    }
  }
}

Building React Chat Component

A well-designed chat component manages conversation state, handles user input, and displays messages with proper formatting. This implementation demonstrates React best practices for state management and component composition.

// components/ChatGPTInterface.jsx
import React, { useState, useEffect, useRef } from 'react';
import { ChatGPTService } from '../services/chatgptService';
import './ChatGPTInterface.css';

const ChatGPTInterface = () => {
  const [messages, setMessages] = useState([]);
  const [input, setInput] = useState('');
  const [isLoading, setIsLoading] = useState(false);
  const [streamingMessage, setStreamingMessage] = useState('');
  const messagesEndRef = useRef(null);

  const scrollToBottom = () => {
    messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
  };

  useEffect(() => {
    scrollToBottom();
  }, [messages, streamingMessage]);

  const handleSendMessage = async (e) => {
    e.preventDefault();
    if (!input.trim() || isLoading) return;

    const userMessage = { role: 'user', content: input };
    const newMessages = [...messages, userMessage];
    setMessages(newMessages);
    setInput('');
    setIsLoading(true);
    setStreamingMessage('');

    try {
      // Accumulate in a local variable: the streamingMessage state captured by
      // this closure would still hold its render-time value ('') once the
      // stream finishes, so reading it here would persist an empty message
      let fullResponse = '';
      await ChatGPTService.streamMessage(
        newMessages,
        (chunk) => {
          fullResponse += chunk;
          setStreamingMessage(prev => prev + chunk);
        },
        { model: 'gpt-4' }
      );

      setMessages(prev => [
        ...prev,
        { role: 'assistant', content: fullResponse }
      ]);
      setStreamingMessage('');
    } catch (error) {
      console.error('Error:', error);
      setMessages(prev => [
        ...prev,
        { role: 'assistant', content: 'Sorry, an error occurred. Please try again.' }
      ]);
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <div className="chat-container">
      <div className="chat-header">
        <h2>ChatGPT Assistant</h2>
      </div>
      
      <div className="messages-container">
        {messages.map((msg, index) => (
          <div key={index} className={`message ${msg.role}`}>
            <div className="message-content">
              {msg.content}
            </div>
          </div>
        ))}
        
        {streamingMessage && (
          <div className="message assistant">
            <div className="message-content">
              {streamingMessage}
              <span className="cursor">|</span>
            </div>
          </div>
        )}
        
        <div ref={messagesEndRef} />
      </div>
      
      <form onSubmit={handleSendMessage} className="input-container">
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Type your message..."
          disabled={isLoading}
          className="message-input"
        />
        <button type="submit" disabled={isLoading} className="send-button">
          {isLoading ? 'Sending...' : 'Send'}
        </button>
      </form>
    </div>
  );
};

export default ChatGPTInterface;

ChatGPT Integration with Angular Applications

Angular’s robust framework provides powerful features for ChatGPT integration through services, dependency injection, and reactive programming with RxJS. Following Angular best practices ensures scalable and maintainable implementations.

Creating Angular ChatGPT Service

Angular services leverage dependency injection and HttpClient for efficient API communication. This service implements proper error handling and Observable patterns for reactive data flow.

// services/chatgpt.service.ts
import { Injectable } from '@angular/core';
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Observable, Subject } from 'rxjs';
import { environment } from '../environments/environment';

export interface ChatMessage {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

export interface ChatGPTResponse {
  id: string;
  choices: Array<{
    message: ChatMessage;
    finish_reason: string;
  }>;
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}

@Injectable({
  providedIn: 'root'
})
export class ChatGPTService {
  private apiUrl = 'https://api.openai.com/v1/chat/completions';
  private apiKey = environment.openaiApiKey;

  constructor(private http: HttpClient) {}

  sendMessage(messages: ChatMessage[], options?: any): Observable<ChatGPTResponse> {
    const headers = new HttpHeaders({
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${this.apiKey}`
    });

    const body = {
      model: options?.model || 'gpt-4',
      messages: messages,
      temperature: options?.temperature ?? 0.7,
      max_tokens: options?.maxTokens ?? 2000
    };

    return this.http.post<ChatGPTResponse>(this.apiUrl, body, { headers });
  }

  streamMessage(messages: ChatMessage[], options?: any): Observable<string> {
    const streamSubject = new Subject<string>();

    const headers = {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${this.apiKey}`
    };

    const body = {
      model: options?.model || 'gpt-4',
      messages: messages,
      temperature: options?.temperature ?? 0.7,
      max_tokens: options?.maxTokens ?? 2000,
      stream: true
    };

    fetch(this.apiUrl, {
      method: 'POST',
      headers: headers,
      body: JSON.stringify(body)
    })
    .then(response => {
      if (!response.ok) {
        streamSubject.error(new Error(`Request failed: ${response.status}`));
        return;
      }

      const reader = response.body?.getReader();
      const decoder = new TextDecoder();
      // Buffer partial lines: a network chunk may end mid-way through an SSE event
      let buffer = '';

      const readStream = () => {
        reader?.read().then(({ done, value }) => {
          if (done) {
            streamSubject.complete();
            return;
          }

          buffer += decoder.decode(value, { stream: true });
          const lines = buffer.split('\n');
          buffer = lines.pop() || '';

          for (const line of lines) {
            if (line.startsWith('data: ')) {
              const data = line.slice(6);
              if (data === '[DONE]') {
                streamSubject.complete();
                return;
              }

              try {
                const parsed = JSON.parse(data);
                const content = parsed.choices[0]?.delta?.content;
                if (content) {
                  streamSubject.next(content);
                }
              } catch (e) {
                console.error('Parse error:', e);
              }
            }
          }

          readStream();
        });
      };

      readStream();
    })
    .catch(error => {
      streamSubject.error(error);
    });

    return streamSubject.asObservable();
  }
}

Building Angular Chat Component

The Angular component leverages TypeScript’s strong typing and Angular’s template syntax for a robust chat interface. This implementation demonstrates proper subscription management and change detection.

// components/chat-interface/chat-interface.component.ts
import { Component, OnDestroy, ViewChild, ElementRef } from '@angular/core';
import { ChatGPTService, ChatMessage } from '../../services/chatgpt.service';
import { Subject } from 'rxjs';
import { takeUntil } from 'rxjs/operators';

@Component({
  selector: 'app-chat-interface',
  templateUrl: './chat-interface.component.html',
  styleUrls: ['./chat-interface.component.css']
})
export class ChatInterfaceComponent implements OnDestroy {
  @ViewChild('messagesContainer') messagesContainer!: ElementRef;
  
  messages: ChatMessage[] = [];
  userInput: string = '';
  isLoading: boolean = false;
  streamingMessage: string = '';
  private destroy$ = new Subject<void>();

  constructor(private chatGPTService: ChatGPTService) {}

  sendMessage(): void {
    if (!this.userInput.trim() || this.isLoading) return;

    const userMessage: ChatMessage = {
      role: 'user',
      content: this.userInput
    };

    this.messages.push(userMessage);
    this.userInput = '';
    this.isLoading = true;
    this.streamingMessage = '';

    this.chatGPTService.streamMessage(this.messages)
      .pipe(takeUntil(this.destroy$))
      .subscribe({
        next: (chunk) => {
          this.streamingMessage += chunk;
          this.scrollToBottom();
        },
        complete: () => {
          this.messages.push({
            role: 'assistant',
            content: this.streamingMessage
          });
          this.streamingMessage = '';
          this.isLoading = false;
          this.scrollToBottom();
        },
        error: (error) => {
          console.error('Error:', error);
          this.messages.push({
            role: 'assistant',
            content: 'Sorry, an error occurred. Please try again.'
          });
          this.isLoading = false;
        }
      });
  }

  private scrollToBottom(): void {
    setTimeout(() => {
      if (this.messagesContainer) {
        this.messagesContainer.nativeElement.scrollTop = 
          this.messagesContainer.nativeElement.scrollHeight;
      }
    }, 100);
  }

  ngOnDestroy(): void {
    this.destroy$.next();
    this.destroy$.complete();
  }
}

.NET Backend Integration for ChatGPT

Implementing ChatGPT integration in .NET applications provides server-side control over API keys, rate limiting, and advanced features like conversation persistence and prompt engineering. This approach is particularly valuable when integrating AI into .NET applications.

Creating .NET ChatGPT Service

ASP.NET Core provides excellent support for HTTP client management through IHttpClientFactory. This service implementation demonstrates proper async/await patterns and error handling.

// Services/ChatGPTService.cs
using System.Net.Http.Headers;
using System.Runtime.CompilerServices;
using System.Text;
using System.Text.Json;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

public interface IChatGPTService
{
    Task<string> SendMessageAsync(List<ChatMessage> messages, CancellationToken cancellationToken = default);
    IAsyncEnumerable<string> StreamMessageAsync(List<ChatMessage> messages, CancellationToken cancellationToken = default);
}

public class ChatMessage
{
    public string Role { get; set; } = string.Empty;
    public string Content { get; set; } = string.Empty;
}

public class ChatGPTService : IChatGPTService
{
    private readonly HttpClient _httpClient;
    private readonly string _apiKey;
    private readonly string _model;
    private readonly ILogger<ChatGPTService> _logger;

    public ChatGPTService(
        IHttpClientFactory httpClientFactory,
        IConfiguration configuration,
        ILogger<ChatGPTService> logger)
    {
        _httpClient = httpClientFactory.CreateClient("OpenAI");
        _apiKey = configuration["OpenAI:ApiKey"]
            ?? throw new InvalidOperationException("OpenAI API key is not configured");
        _model = configuration["OpenAI:Model"] ?? "gpt-4";
        _logger = logger;
        
        _httpClient.DefaultRequestHeaders.Authorization = 
            new AuthenticationHeaderValue("Bearer", _apiKey);
    }

    public async Task<string> SendMessageAsync(
        List<ChatMessage> messages,
        CancellationToken cancellationToken = default)
    {
        try
        {
            var requestBody = new
            {
                model = _model,
                messages = messages.Select(m => new { role = m.Role, content = m.Content }),
                temperature = 0.7,
                max_tokens = 2000
            };

            var content = new StringContent(
                JsonSerializer.Serialize(requestBody),
                Encoding.UTF8,
                "application/json");

            var response = await _httpClient.PostAsync(
                "https://api.openai.com/v1/chat/completions",
                content,
                cancellationToken);

            response.EnsureSuccessStatusCode();

            var responseContent = await response.Content.ReadAsStringAsync(cancellationToken);
            using var doc = JsonDocument.Parse(responseContent);
            
            return doc.RootElement
                .GetProperty("choices")[0]
                .GetProperty("message")
                .GetProperty("content")
                .GetString() ?? string.Empty;
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error calling ChatGPT API");
            throw;
        }
    }

    public async IAsyncEnumerable<string> StreamMessageAsync(
        List<ChatMessage> messages,
        [EnumeratorCancellation] CancellationToken cancellationToken = default)
    {
        var requestBody = new
        {
            model = _model,
            messages = messages.Select(m => new { role = m.Role, content = m.Content }),
            temperature = 0.7,
            max_tokens = 2000,
            stream = true
        };

        var content = new StringContent(
            JsonSerializer.Serialize(requestBody),
            Encoding.UTF8,
            "application/json");

        var request = new HttpRequestMessage(HttpMethod.Post, "https://api.openai.com/v1/chat/completions")
        {
            Content = content
        };

        using var response = await _httpClient.SendAsync(
            request,
            HttpCompletionOption.ResponseHeadersRead,
            cancellationToken);

        response.EnsureSuccessStatusCode();

        await using var stream = await response.Content.ReadAsStreamAsync(cancellationToken);
        using var reader = new StreamReader(stream);

        while (!reader.EndOfStream && !cancellationToken.IsCancellationRequested)
        {
            var line = await reader.ReadLineAsync();
            if (string.IsNullOrWhiteSpace(line)) continue;
            if (!line.StartsWith("data: ")) continue;

            var data = line[6..];
            if (data == "[DONE]") break;

            try
            {
                using var doc = JsonDocument.Parse(data);
                var delta = doc.RootElement
                    .GetProperty("choices")[0]
                    .GetProperty("delta");

                if (delta.TryGetProperty("content", out var contentElement))
                {
                    var chunk = contentElement.GetString();
                    if (!string.IsNullOrEmpty(chunk))
                    {
                        yield return chunk;
                    }
                }
            }
            catch (JsonException ex)
            {
                _logger.LogWarning(ex, "Failed to parse streaming response chunk");
            }
        }
    }
}

Creating .NET API Controller

The API controller exposes ChatGPT functionality to frontend applications through RESTful endpoints. This implementation includes proper validation, error handling, and supports both standard and streaming responses.

// Controllers/ChatController.cs
using Microsoft.AspNetCore.Mvc;
using System.Text;
using System.Text.Json;

[ApiController]
[Route("api/[controller]")]
public class ChatController : ControllerBase
{
    private readonly IChatGPTService _chatGPTService;
    private readonly ILogger<ChatController> _logger;

    public ChatController(
        IChatGPTService chatGPTService,
        ILogger<ChatController> logger)
    {
        _chatGPTService = chatGPTService;
        _logger = logger;
    }

    [HttpPost("send")]
    public async Task<IActionResult> SendMessage(
        [FromBody] ChatRequest request,
        CancellationToken cancellationToken)
    {
        if (request.Messages == null || !request.Messages.Any())
        {
            return BadRequest("Messages cannot be empty");
        }

        try
        {
            var response = await _chatGPTService.SendMessageAsync(
                request.Messages,
                cancellationToken);

            return Ok(new { message = response });
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error processing chat request");
            return StatusCode(500, "An error occurred while processing your request");
        }
    }

    [HttpPost("stream")]
    public async Task StreamMessage(
        [FromBody] ChatRequest request,
        CancellationToken cancellationToken)
    {
        if (request.Messages == null || !request.Messages.Any())
        {
            Response.StatusCode = 400;
            await Response.WriteAsync("Messages cannot be empty", cancellationToken);
            return;
        }

        Response.ContentType = "text/event-stream";
        Response.Headers["Cache-Control"] = "no-cache";
        Response.Headers["Connection"] = "keep-alive";

        try
        {
            await foreach (var chunk in _chatGPTService.StreamMessageAsync(
                request.Messages,
                cancellationToken))
            {
                var data = $"data: {JsonSerializer.Serialize(new { content = chunk })}\n\n";
                await Response.WriteAsync(data, cancellationToken);
                await Response.Body.FlushAsync(cancellationToken);
            }

            await Response.WriteAsync("data: [DONE]\n\n", cancellationToken);
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error streaming chat response");
            await Response.WriteAsync(
                $"data: {JsonSerializer.Serialize(new { error = "An error occurred" })}\n\n",
                cancellationToken);
        }
    }
}

public class ChatRequest
{
    public List<ChatMessage> Messages { get; set; } = new();
}
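With the controller above in place, a frontend can consume the streaming endpoint without ever seeing the OpenAI key. The following sketch assumes the `/api/chat/stream` path and the `{ content }` chunk shape emitted by the controller; `parseSseLines` is an illustrative helper, and for brevity the consumer does not buffer events split across network chunks:

```typescript
// Parse the "data: {...}" lines emitted by the backend's /api/chat/stream
// endpoint; returns extracted text chunks and whether [DONE] was seen.
function parseSseLines(lines: string[]): { chunks: string[]; done: boolean } {
  const chunks: string[] = [];
  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const data = line.slice(6);
    if (data === '[DONE]') return { chunks, done: true };
    try {
      const parsed = JSON.parse(data);
      if (parsed.content) chunks.push(parsed.content);
    } catch { /* ignore malformed or partial lines */ }
  }
  return { chunks, done: false };
}

// Stream a conversation through the backend proxy, reporting text as it arrives.
async function streamViaBackend(
  messages: { role: string; content: string }[],
  onChunk: (text: string) => void
): Promise<void> {
  const response = await fetch('/api/chat/stream', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },  // no Authorization header needed
    body: JSON.stringify({ messages }),
  });
  if (!response.ok || !response.body) throw new Error(`Request failed: ${response.status}`);

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const parsed = parseSseLines(decoder.decode(value, { stream: true }).split('\n'));
    parsed.chunks.forEach(onChunk);
    if (parsed.done) break;
  }
}
```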

Advanced ChatGPT Integration Patterns

Beyond basic implementation, production applications require sophisticated patterns for conversation management, context preservation, and error handling. These patterns ensure robust and scalable ChatGPT integration across React, Angular, and .NET applications.

Conversation Context Management

Maintaining conversation context is crucial for coherent multi-turn interactions. Implementing context windows with token management prevents API limits while preserving relevant conversation history.

// Services/ConversationManager.cs
using Microsoft.Extensions.Caching.Memory;

public class ConversationManager
{
    private const int MaxTokens = 4000;
    private const int AverageTokensPerMessage = 100;
    private readonly IMemoryCache _cache;

    public ConversationManager(IMemoryCache cache)
    {
        _cache = cache;
    }

    public List<ChatMessage> GetConversationContext(string conversationId)
    {
        var key = $"conversation_{conversationId}";
        return _cache.GetOrCreate(key, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1);
            return new List<ChatMessage>();
        }) ?? new List<ChatMessage>();
    }

    public void AddMessage(string conversationId, ChatMessage message)
    {
        var messages = GetConversationContext(conversationId);
        messages.Add(message);

        // Trim old messages if exceeding token limit
        var maxMessages = MaxTokens / AverageTokensPerMessage;
        if (messages.Count > maxMessages)
        {
            messages.RemoveRange(0, messages.Count - maxMessages);
        }

        UpdateCache(conversationId, messages);
    }

    private void UpdateCache(string conversationId, List<ChatMessage> messages)
    {
        var key = $"conversation_{conversationId}";
        _cache.Set(key, messages, TimeSpan.FromHours(1));
    }
}

Rate Limiting and Cost Control

Implementing rate limiting protects against excessive API costs and ensures fair usage across users. This middleware approach provides flexible rate limiting for .NET applications.

// Middleware/RateLimitingMiddleware.cs
using Microsoft.AspNetCore.Http;
using System.Collections.Concurrent;

public class RateLimitingMiddleware
{
    private readonly RequestDelegate _next;
    private static readonly ConcurrentDictionary<string, TokenBucket> _buckets = new();
    private const int RequestsPerMinute = 20;

    public RateLimitingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var identifier = context.Connection.RemoteIpAddress?.ToString() ?? "unknown";
        var bucket = _buckets.GetOrAdd(identifier, _ => new TokenBucket(RequestsPerMinute));

        if (!bucket.TryConsume())
        {
            context.Response.StatusCode = 429;
            await context.Response.WriteAsync("Rate limit exceeded. Please try again later.");
            return;
        }

        await _next(context);
    }
}

public class TokenBucket
{
    private readonly int _capacity;
    private int _tokens;
    private DateTime _lastRefill;
    private readonly object _lock = new();

    public TokenBucket(int capacity)
    {
        _capacity = capacity;
        _tokens = capacity;
        _lastRefill = DateTime.UtcNow;
    }

    public bool TryConsume()
    {
        lock (_lock)
        {
            Refill();
            if (_tokens > 0)
            {
                _tokens--;
                return true;
            }
            return false;
        }
    }

    private void Refill()
    {
        var now = DateTime.UtcNow;
        var timePassed = now - _lastRefill;
        
        if (timePassed.TotalMinutes >= 1)
        {
            _tokens = _capacity;
            _lastRefill = now;
        }
    }
}

Security Best Practices for ChatGPT Integration

Security is paramount when integrating third-party AI services. Proper API key management, input validation, and content filtering protect both your application and users.

API Key Protection

Never expose API keys in frontend code. Always proxy requests through your backend where keys can be securely stored in environment variables or Azure Key Vault.

// appsettings.json (Development)
{
  "OpenAI": {
    "ApiKey": "your-api-key-here",
    "Model": "gpt-4"
  }
}

// For production, use environment variables or Azure Key Vault
// Environment variable: OPENAI__APIKEY
// Key Vault reference: @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/OpenAI-ApiKey/)
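With the key held server-side, the frontend services shown earlier can drop the OpenAI credential entirely and target the backend instead. A sketch, assuming the `/api/chat/send` endpoint and the `{ message }` response shape of the controller defined earlier:

```typescript
interface ChatMessage { role: string; content: string; }

// Build the fetch options for the proxy call; no Authorization header is set
// because the OpenAI key lives only on the server.
function buildProxyRequest(messages: ChatMessage[]): RequestInit {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages }),
  };
}

// Call the backend proxy rather than api.openai.com directly.
async function sendViaBackend(messages: ChatMessage[]): Promise<string> {
  const response = await fetch('/api/chat/send', buildProxyRequest(messages));
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  const body = await response.json();
  return body.message;  // shape returned by the SendMessage action
}
```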

Input Validation and Sanitization

Validate and sanitize all user inputs before sending them to ChatGPT. Checks like the ones below raise the bar against prompt injection and inappropriate content, though simple pattern blocklists are easy to bypass and work best combined with a firm system prompt and output filtering.

// Services/InputValidator.cs
using System.Text.RegularExpressions;

public class InputValidator
{
    private const int MaxInputLength = 4000;
    private static readonly string[] ProhibitedPatterns = new[]
    {
        @"ignore previous instructions",
        @"disregard all above",
        @"forget everything"
    };

    public static (bool IsValid, string ErrorMessage) ValidateInput(string input)
    {
        if (string.IsNullOrWhiteSpace(input))
        {
            return (false, "Input cannot be empty");
        }

        if (input.Length > MaxInputLength)
        {
            return (false, $"Input exceeds maximum length of {MaxInputLength} characters");
        }

        foreach (var pattern in ProhibitedPatterns)
        {
            if (Regex.IsMatch(input, pattern, RegexOptions.IgnoreCase))
            {
                return (false, "Input contains prohibited content");
            }
        }

        return (true, string.Empty);
    }

    public static string SanitizeInput(string input)
    {
        // Remove potential code injection patterns
        input = Regex.Replace(input, @"<script[^>]*>.*?</script>", "", RegexOptions.IgnoreCase);
        input = Regex.Replace(input, @"<iframe[^>]*>.*?</iframe>", "", RegexOptions.IgnoreCase);
        
        return input.Trim();
    }
}

Performance Optimization Strategies

Optimizing ChatGPT integration ensures responsive user experiences and cost efficiency. These strategies apply across React, Angular, and .NET implementations.

Response Caching

Caching common queries reduces API calls and improves response times. Implement intelligent caching that considers conversation context and user-specific parameters.

// Services/CachedChatGPTService.cs
using System.Security.Cryptography;
using System.Text;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Logging;

public class CachedChatGPTService : IChatGPTService
{
    private readonly IChatGPTService _innerService;
    private readonly IDistributedCache _cache;
    private readonly ILogger<CachedChatGPTService> _logger;
    private const int CacheExpirationMinutes = 60;

    public CachedChatGPTService(
        IChatGPTService innerService,
        IDistributedCache cache,
        ILogger<CachedChatGPTService> logger)
    {
        _innerService = innerService;
        _cache = cache;
        _logger = logger;
    }

    public async Task<string> SendMessageAsync(
        List<ChatMessage> messages,
        CancellationToken cancellationToken = default)
    {
        var cacheKey = GenerateCacheKey(messages);
        var cachedResponse = await _cache.GetStringAsync(cacheKey, cancellationToken);

        if (!string.IsNullOrEmpty(cachedResponse))
        {
            _logger.LogInformation("Cache hit for key: {CacheKey}", cacheKey);
            return cachedResponse;
        }

        var response = await _innerService.SendMessageAsync(messages, cancellationToken);

        var options = new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(CacheExpirationMinutes)
        };

        await _cache.SetStringAsync(cacheKey, response, options, cancellationToken);
        return response;
    }

    private static string GenerateCacheKey(List<ChatMessage> messages)
    {
        var content = string.Join("|", messages.Select(m => $"{m.Role}:{m.Content}"));
        using var sha256 = SHA256.Create();
        var hash = sha256.ComputeHash(Encoding.UTF8.GetBytes(content));
        return $"chatgpt_{Convert.ToBase64String(hash)}";
    }

    public IAsyncEnumerable<string> StreamMessageAsync(
        List<ChatMessage> messages,
        CancellationToken cancellationToken = default)
    {
        // Streaming responses are not cached
        return _innerService.StreamMessageAsync(messages, cancellationToken);
    }
}

Testing ChatGPT Integration

Comprehensive testing ensures reliable ChatGPT integration. Implement unit tests, integration tests, and mock services for development and testing environments.
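On the frontend side, a drop-in mock of the React service lets you develop and test the chat UI without spending tokens. A sketch mirroring the static `sendMessage`/`streamMessage` interface of the `ChatGPTService` defined earlier (the echo behavior and delay are illustrative):

```typescript
// Development-only stand-in for ChatGPTService: echoes a canned reply,
// streaming it word by word to exercise the UI's streaming code path.
class MockChatGPTService {
  static async sendMessage(messages: { role: string; content: string }[]) {
    const last = messages[messages.length - 1];
    return { role: 'assistant', content: `Echo: ${last.content}` };
  }

  static async streamMessage(
    messages: { role: string; content: string }[],
    onChunk: (chunk: string) => void
  ): Promise<void> {
    const reply = `Echo: ${messages[messages.length - 1].content}`;
    for (const word of reply.split(' ')) {
      await new Promise((r) => setTimeout(r, 5)); // simulate network latency
      onChunk(word + ' ');
    }
  }
}
```

Swapping the import during development (or behind an environment flag) keeps components unchanged while avoiding live API calls.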

// Tests/ChatGPTServiceTests.cs
using System.Net;
using System.Text.Json;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;
using Moq;
using Moq.Protected;
using Xunit;

public class ChatGPTServiceTests
{
    [Fact]
    public async Task SendMessageAsync_ValidRequest_ReturnsResponse()
    {
        // Arrange
        var mockResponse = new
        {
            choices = new[]
            {
                new
                {
                    message = new
                    {
                        role = "assistant",
                        content = "Test response"
                    }
                }
            }
        };

        var handlerMock = new Mock<HttpMessageHandler>();
        handlerMock.Protected()
            .Setup<Task<HttpResponseMessage>>(
                "SendAsync",
                ItExpr.IsAny<HttpRequestMessage>(),
                ItExpr.IsAny<CancellationToken>())
            .ReturnsAsync(new HttpResponseMessage
            {
                StatusCode = HttpStatusCode.OK,
                Content = new StringContent(JsonSerializer.Serialize(mockResponse))
            });

        var httpClient = new HttpClient(handlerMock.Object);
        var mockFactory = new Mock<IHttpClientFactory>();
        mockFactory.Setup(f => f.CreateClient("OpenAI")).Returns(httpClient);

        var mockConfig = new Mock<IConfiguration>();
        mockConfig.Setup(c => c["OpenAI:ApiKey"]).Returns("test-key");
        mockConfig.Setup(c => c["OpenAI:Model"]).Returns("gpt-4");

        var mockLogger = new Mock<ILogger<ChatGPTService>>();

        var service = new ChatGPTService(mockFactory.Object, mockConfig.Object, mockLogger.Object);

        var messages = new List<ChatMessage>
        {
            new ChatMessage { Role = "user", Content = "Test message" }
        };

        // Act
        var result = await service.SendMessageAsync(messages);

        // Assert
        Assert.Equal("Test response", result);
    }
}
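On the frontend side, a mock service lets React or Angular developers build and test chat UI without live API keys or token costs. A minimal TypeScript sketch, assuming a hypothetical `ChatService` interface shaped like the service layer described earlier:

```typescript
// Hypothetical contract; adjust to match your real service layer.
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

interface ChatService {
  sendMessage(messages: ChatMessage[]): Promise<string>;
}

// Mock implementation for development and testing: returns a canned
// response after a small artificial delay to mimic network latency.
class MockChatService implements ChatService {
  constructor(private readonly delayMs = 50) {}

  async sendMessage(messages: ChatMessage[]): Promise<string> {
    await new Promise((resolve) => setTimeout(resolve, this.delayMs));
    const last = messages[messages.length - 1];
    return `Mock reply to: "${last?.content ?? ""}"`;
  }
}

// Swap the mock in behind an environment flag so production code paths
// receive the real implementation unchanged.
const chatService: ChatService = new MockChatService();

chatService
  .sendMessage([{ role: "user", content: "Hello" }])
  .then((reply) => console.log(reply)); // logs: Mock reply to: "Hello"
```

Because components depend only on the interface, swapping the mock for the real service is a one-line change in composition, which also makes component unit tests deterministic.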

Deployment Considerations

Deploying ChatGPT-integrated applications requires careful planning for scalability, monitoring, and cost management. Consider using Azure App Service for seamless deployment and scaling of your .NET backend alongside React or Angular frontends.
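As a sketch, API keys belong in platform configuration rather than source control. With Azure App Service, the key can be supplied as an app setting; the double-underscore name `OpenAI__ApiKey` maps to the `OpenAI:ApiKey` configuration path that the .NET service reads. Resource names below are placeholders for your own deployment:

```shell
# Store the OpenAI key as an App Service setting instead of committing it.
az webapp config appsettings set \
  --name my-chatgpt-app \
  --resource-group my-resource-group \
  --settings OpenAI__ApiKey="<your-api-key>"
```

For stronger protection, the same setting can reference a secret held in Azure Key Vault rather than holding the value directly.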

Monitoring and Logging

Implement comprehensive logging and monitoring to track API usage, response times, and errors. This enables proactive issue resolution and cost optimization.

// Configure Application Insights in Program.cs
builder.Services.AddApplicationInsightsTelemetry();

// ChatGPTTelemetry.cs - custom telemetry for ChatGPT requests
using Microsoft.ApplicationInsights;

public class ChatGPTTelemetry
{
    private readonly TelemetryClient _telemetryClient;

    public ChatGPTTelemetry(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }

    public void TrackChatGPTRequest(
        string conversationId,
        int messageCount,
        double responseTime,
        bool isSuccess)
    {
        var properties = new Dictionary<string, string>
        {
            { "ConversationId", conversationId },
            { "MessageCount", messageCount.ToString() },
            { "IsSuccess", isSuccess.ToString() }
        };

        var metrics = new Dictionary<string, double>
        {
            { "ResponseTime", responseTime }
        };

        _telemetryClient.TrackEvent("ChatGPTRequest", properties, metrics);
    }
}
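The same response-time signal can be captured client-side around each API call. A minimal TypeScript sketch (the telemetry record shape mirrors the server-side properties; the `sink` callback is a hypothetical placeholder for the Application Insights JavaScript SDK or any logging endpoint):

```typescript
// Hypothetical telemetry record mirroring the server-side properties.
interface ChatRequestTelemetry {
  conversationId: string;
  messageCount: number;
  responseTimeMs: number;
  isSuccess: boolean;
}

// Wrap any async ChatGPT call, recording elapsed time and success/failure,
// then re-throw so callers keep their normal error handling.
async function trackChatRequest<T>(
  conversationId: string,
  messageCount: number,
  call: () => Promise<T>,
  sink: (t: ChatRequestTelemetry) => void
): Promise<T> {
  const start = Date.now();
  try {
    const result = await call();
    sink({
      conversationId,
      messageCount,
      responseTimeMs: Date.now() - start,
      isSuccess: true,
    });
    return result;
  } catch (err) {
    sink({
      conversationId,
      messageCount,
      responseTimeMs: Date.now() - start,
      isSuccess: false,
    });
    throw err;
  }
}
```

Wrapping calls this way keeps measurement out of the UI components and gives the backend and frontend a consistent view of per-conversation latency and error rates.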

Conclusion

Integrating ChatGPT with React, Angular, and .NET applications opens powerful possibilities for creating intelligent, conversational user experiences. This comprehensive guide covered essential implementation patterns, from basic API integration to advanced features like streaming responses, conversation management, and security best practices.

Key takeaways include the importance of server-side API key management, implementing proper error handling and rate limiting, optimizing performance through caching and streaming, and comprehensive testing strategies. Whether building with React’s component-based architecture, Angular’s robust framework, or .NET’s powerful backend capabilities, these patterns provide a solid foundation for production-ready ChatGPT integration.

As AI continues to evolve, staying current with AI integration best practices and leveraging the strengths of each technology stack will be crucial for building competitive, intelligent applications. Start with these foundational implementations and expand based on your specific requirements and user needs.

Looking to implement ChatGPT in your applications? WireFuture specializes in web development with expertise in React, Angular, and .NET technologies. Contact us at +91-9925192180 to discuss your AI integration project.

About Author

Tapesh Mehta

Expert in Software Development

Tapesh Mehta is a seasoned software developer who has been building web, mobile, and desktop applications for over 15 years. He works across many languages and frameworks: for robust web solutions he specializes in ASP.NET, PHP, and Python, and he builds hybrid mobile apps with Ionic, Xamarin, and Flutter to deliver consistent cross-platform user experiences. In addition, Tapesh has extensive experience building complex desktop applications with WPF. His work is marked by a constant drive to learn and adapt.
