Debugging JSON API Responses: A Field Guide for 2026

How professional developers bisect 'works in Postman, breaks in production' JSON bugs in under five minutes. Encoding pitfalls, BigInt precision loss, BOM bytes, NaN handling, broken Content-Type headers, partial response truncation — every gotcha and the exact diagnostic step to catch it.

Yamini·Content & UX
·11 min read·Debugging

Every backend developer has lived this loop: you curl the endpoint, it returns something that looks like JSON, you paste it into your tool, and the parser explodes. The error message says 'Unexpected token at position 247' and you have no idea what that means. Here's what actually goes wrong and how to bisect each failure mode in under a minute. This is the playbook our team uses internally; it has saved more hours of debugging than any single piece of tooling.

Step 1: Verify it's actually JSON

Before parsing, look at the raw bytes. The fastest way is `curl -s URL | head -c 500` to dump the first 500 bytes to your terminal. What you see tells you a lot.

Patterns that mean it's not JSON

  • Starts with <html> or <!DOCTYPE — the server returned an HTML error page. Common cause: an auth redirect to a login page, a 500 with a stack-trace HTML page, or hitting a CDN edge error.
  • Starts with a function name wrapping JSON in parentheses, like callback({...}) — JSONP. The wrapper is function-call syntax used to bypass the same-origin policy in old browsers. Strip the wrapper, keeping only what's inside the parentheses.
  • Starts with )]}',\n — Google's XSSI prevention prefix. Used by some Google APIs to make the response invalid JavaScript when loaded as a <script>. Strip everything up to and including the first newline before parsing.
  • Starts with \xFF\xFE or similar — UTF-16 byte-order mark. Less common but happens with .NET endpoints. Re-encode to UTF-8 before parsing.
  • Looks like JSON but has trailing garbage like binary nulls — usually a partial response or a streaming endpoint you've truncated. Check Content-Length vs actual bytes received.

Once you know what you're dealing with, you can strip the wrapper and try parsing the rest. Most modern JSON formatters skip these wrappers automatically, but it's worth knowing what's happening.
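If you'd rather strip the wrappers yourself, a rough sketch in Node might look like this. The function name `unwrap` and the exact patterns are illustrative, not exhaustive:

```javascript
// Sketch: strip common non-JSON wrappers before parsing.
// Covers a decoded BOM, the XSSI prefix, and a JSONP callback.
function unwrap(raw) {
  let s = raw;
  // a UTF-8 BOM survives string decoding as U+FEFF
  if (s.charCodeAt(0) === 0xfeff) s = s.slice(1);
  // XSSI prevention prefix: )]}' optionally followed by , and a newline
  s = s.replace(/^\)\]\}',?\r?\n?/, '');
  // JSONP: callbackName( ... ) optionally ending with ;
  const m = s.match(/^[\w$.]+\(([\s\S]*)\)\s*;?\s*$/);
  if (m) s = m[1];
  return s;
}

console.log(unwrap(")]}',\n{\"ok\":true}"));     // {"ok":true}
console.log(unwrap('handleData({"ok":true});')); // {"ok":true}
```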

Step 2: Check the encoding

JSON is supposed to be UTF-8. RFC 8259 says so explicitly. But not every server obeys, and the wrong encoding will produce subtle bugs rather than immediate failures.

The Content-Type header

Run `curl -I URL` to see the response headers. The Content-Type should be exactly `application/json; charset=utf-8`. If it says `text/plain` or `text/html`, the server is mis-labeling the response — browsers may sniff the type, but stricter clients will refuse. If charset is missing or set to anything other than utf-8, you might get mojibake (é displayed as Ã©) or decoder errors on non-ASCII characters.
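The same check is easy to automate in client code. Here's a minimal sketch; `assertJsonType` is a hypothetical helper of ours, not a standard API:

```javascript
// Sketch: fail fast on a mislabeled response before trying to parse it.
// assertJsonType is an illustrative helper, not part of any library.
function assertJsonType(contentType) {
  const type = (contentType || '').toLowerCase();
  if (!type.includes('application/json')) {
    throw new Error(`expected application/json, got "${contentType}"`);
  }
  if (type.includes('charset=') && !type.includes('charset=utf-8')) {
    throw new Error(`unexpected charset in "${contentType}"`);
  }
}

// Usage with fetch:
// const res = await fetch(url);
// assertJsonType(res.headers.get('content-type'));
// const data = await res.json();
```

Failing loudly here turns a cryptic parse error into an immediate, descriptive one.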

Detecting BOM bytes

Some servers (especially .NET on Windows) prefix UTF-8 responses with a Byte-Order Mark — three bytes (0xEF, 0xBB, 0xBF) at the very start of the file. These bytes are invisible in most editors, but many JSON parsers (including JavaScript's JSON.parse) reject them. The fix in JavaScript: `JSON.parse(raw.replace(/^\uFEFF/, ''))`. Our formatter strips BOM automatically before parsing.
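If you're working with raw bytes rather than a decoded string, you can check for the BOM at the buffer level. A minimal Node sketch:

```javascript
// Sketch: detect and strip a UTF-8 BOM at the byte level in Node.
const BOM = Buffer.from([0xef, 0xbb, 0xbf]);

function stripBom(buf) {
  // compare the first three bytes against 0xEF 0xBB 0xBF
  if (buf.length >= 3 && buf.subarray(0, 3).equals(BOM)) {
    return buf.subarray(3);
  }
  return buf;
}

const raw = Buffer.concat([BOM, Buffer.from('{"ok":true}')]);
console.log(JSON.parse(stripBom(raw).toString('utf8'))); // { ok: true }
```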

Double-encoded UTF-8

Happens when text is UTF-8 encoded, then accidentally interpreted as Latin-1 and re-encoded as UTF-8. The result is mojibake even though every byte is valid. If you see sequences like Ã© where é should be, the data was encoded twice. The fix is at the source — find the system that's re-encoding and stop it.
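While you chase the source, you can confirm the diagnosis (and recover the text) by reversing one round of the damage. This sketch assumes the text really was double-encoded through Latin-1; applying it to healthy text will corrupt it:

```javascript
// Sketch: undo one round of double encoding by reinterpreting each
// code point as a Latin-1 byte, then decoding those bytes as UTF-8.
// Only safe when you are sure the text was actually double-encoded.
function undoDoubleEncode(s) {
  return Buffer.from(s, 'latin1').toString('utf8');
}

console.log(undoDoubleEncode('caf\u00C3\u00A9')); // café
```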

Step 3: Paste into a formatter that points to errors

Don't open it in a text editor and try to count brackets manually. Paste it into a formatter that reports line/column on parse errors and lets you jump to the broken character. The YaliKit JSON Formatter does both — it shows the exact line and column, highlights the offending line in the gutter, and has a Jump button that scrolls and selects the bad character.

If the formatter offers an auto-fix banner (because the input is JS-object-style rather than strict JSON), that's a tell that the API is being lenient with its own output — possibly a sign of a broken serialization layer on the server side. Worth reporting upstream.

Step 4: Compare against the expected shape

Got JSON but the wrong shape? Validate it against a schema. If you don't have one, generate one from a known-good response using a JSON Schema Generator, then run the broken response through the validator. The validator will tell you exactly which field is missing, which type changed, which enum value is unrecognized.
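If pulling in a full JSON Schema validator is overkill, the same idea fits in a few lines. This is a minimal shape check of our own devising, not a real schema validator — `shapeOf` and `diffShape` are hypothetical names:

```javascript
// Sketch: compare a broken response against a known-good one by shape.
// Reports fields that are missing or changed type; not a schema validator.
function shapeOf(value) {
  if (Array.isArray(value)) return 'array';
  if (value === null) return 'null';
  return typeof value;
}

function diffShape(good, broken, path = '') {
  const problems = [];
  for (const key of Object.keys(good)) {
    const p = path ? `${path}.${key}` : key;
    if (!(key in broken)) {
      problems.push(`missing field: ${p}`);
    } else if (shapeOf(broken[key]) !== shapeOf(good[key])) {
      problems.push(`type changed at ${p}: ${shapeOf(good[key])} -> ${shapeOf(broken[key])}`);
    } else if (shapeOf(good[key]) === 'object') {
      problems.push(...diffShape(good[key], broken[key], p));
    }
  }
  return problems;
}

const good = { user: { id: 'abc', age: 30 } };
const broken = { user: { id: 123 } };
console.log(diffShape(good, broken));
// [ 'type changed at user.id: string -> number', 'missing field: user.age' ]
```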

If the broken response is a regression — it used to work and now doesn't — and you have logs of the old response, diff the two with our JSON Compare tool. The structural diff highlights only fields that actually changed, ignoring whitespace and key order.

Common JSON gotchas and their fixes

Numeric IDs over 2^53

JavaScript numbers are 64-bit floats and can only safely represent integers up to 9,007,199,254,740,991 (Number.MAX_SAFE_INTEGER, 2^53 − 1). Twitter, Discord, and other services that use snowflake IDs send numbers larger than this. JSON.parse() will read them as approximations, silently corrupting your IDs. The fix: ask the API to serialize them as strings, or use a library that preserves them (like `json-bigint` in Node, or a 64-bit-integer-aware parser such as `simdjson` in C++). When testing, look for distinct IDs that parse to the same value — a strong signal of precision loss.
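Here's the loss in action, along with one naive workaround: quoting long integer values before parsing. The 16-digit cutoff and the regex are illustrative assumptions — a real fix should use a bigint-aware parser:

```javascript
// Sketch: demonstrate the precision loss, then quote big integers
// before parsing so their digits survive as strings.
const raw = '{"id": 9007199254740993}';
console.log(JSON.parse(raw).id); // 9007199254740992 -- silently wrong

function quoteBigInts(text) {
  // wrap standalone integer values of 16+ digits in quotes;
  // naive: this regex can misfire on digits inside string values
  return text.replace(/(:\s*)(-?\d{16,})(\s*[,}\]])/g, '$1"$2"$3');
}

const safe = JSON.parse(quoteBigInts(raw));
console.log(safe.id);         // the exact digits, now a string
console.log(BigInt(safe.id)); // 9007199254740993n
```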

Infinity and NaN literals

Not valid JSON. JSON has no representation for IEEE-754 infinity or not-a-number values. Some servers return them anyway as bare tokens: { "score": Infinity }. JSON.parse will throw. Fix: do a regex preprocessing step before parsing — replace /:\s*Infinity\b/g with ':null' (or whatever sentinel makes sense for your domain). Better fix: tell the upstream team to serialize them as strings or null.
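The preprocessing step might look like this. Replacing with null is an assumption — pick whatever sentinel fits your domain — and, like any regex-on-JSON trick, it can misfire if 'Infinity' or 'NaN' legitimately appear inside string values:

```javascript
// Sketch: replace bare non-finite value tokens with null before parsing.
// Caveat: a naive regex; can misfire on these words inside strings.
function sanitizeNonFinite(text) {
  return text.replace(/(:\s*)(-?Infinity|NaN)\b/g, '$1null');
}

console.log(JSON.parse(sanitizeNonFinite('{"score": Infinity, "ratio": NaN}')));
// { score: null, ratio: null }
```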

Unicode escapes

JSON's spec supports \uXXXX escapes for any BMP character. For characters above U+FFFF (most emoji, less common scripts), you need surrogate pairs: \uD83D\uDE00 for 😀. Some serializers get this wrong and emit a lone half of the pair, which can break parsers expecting strict encoding. If you see unexpected character errors in the middle of what looks like valid text, suspect a malformed surrogate pair.

Trailing whitespace and null bytes

Common with streaming endpoints (server-sent events, chunked transfer encoding). The server sends valid JSON followed by extra bytes — sometimes a heartbeat ping, sometimes nulls used as padding. Parsers may or may not tolerate this. Trim before parsing: `raw.trim()` removes whitespace; `raw.replace(/\x00+$/, '')` strips trailing nulls.

Mixed quote styles

JSON requires double quotes around strings. Some hand-written test data uses single quotes, and some buggy serializers will emit single quotes inadvertently. Strict parsers reject; lenient parsers (like our auto-fix preprocessor) accept and convert. Either way, fix at the source.

Trailing commas

The most common JSON error in the wild. JavaScript object literals allow them; JSON does not. Sources include hand-written JSON, JSON5/JSONC files that weren't converted, and APIs that use a loose serializer. Auto-fix is trivial — just delete the comma — but watch for the structural problem this often indicates (someone tried to write JSON in a JavaScript file).

Comments

JSON doesn't support comments. // and /* */ both fail. If your config file needs comments, either switch to JSON5/JSONC, or use a leading-underscore convention: { "_note": "This is a comment", "actual_field": 1 }. Some tools (including our formatter) strip comments automatically on parse.

Cross-language JSON differences

JSON parsers in different languages have slightly different defaults. Things that work in Python may break in JavaScript and vice versa.

  • Python's json module accepts NaN, Infinity, and -Infinity as literals by default. JavaScript's JSON.parse does not. Pass allow_nan=False to json.dumps in Python if your data crosses language boundaries.
  • Python's json preserves the order of object keys (since 3.7); some older JavaScript engines didn't. Don't rely on order across language boundaries.
  • Go's encoding/json marshals time.Time as RFC 3339 strings. JavaScript's JSON.stringify on a Date object also gives ISO 8601 strings. But Python's json.dumps doesn't know how to serialize datetime objects without a custom encoder — easy source of TypeError bugs.
  • Java's Jackson and Gson have different defaults for handling null fields. Jackson includes them by default; Gson omits them. Configure explicitly.
  • Rust's serde_json is strict about types — strings can't be parsed as numbers, even if they look like numbers. JavaScript is more lenient. If your TypeScript client and your Rust server disagree on a field's type, this is often the cause.

Network-level issues that look like JSON bugs

Sometimes the JSON is fine but the network mangled it.

Gzip not decompressed

If you see binary garbage at the start of the response, the server compressed it but your client didn't decompress. Check the Content-Encoding header — if it says gzip or br (Brotli), you need to decompress before parsing. Curl handles this with the --compressed flag.

Partial responses

A connection drop mid-response gives you valid prefix but truncated suffix. The parser will say 'Unexpected end of input' at some position. Check Content-Length against actual bytes received. If they differ, retry the request.
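A rough sketch of that check; `isComplete` is a hypothetical helper name, and it only applies to responses that actually declare a Content-Length (chunked responses don't):

```javascript
// Sketch: compare the declared Content-Length against bytes received.
function isComplete(headers, body) {
  const declared = Number(headers['content-length']);
  if (!Number.isFinite(declared)) return true; // no declared length to check
  return Buffer.byteLength(body, 'utf8') === declared;
}

console.log(isComplete({ 'content-length': '11' }, '{"ok":true}')); // true
console.log(isComplete({ 'content-length': '50' }, '{"ok":tr'));    // false -- truncated
```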

Wrong endpoint

Sounds obvious but happens constantly: hitting /api/v2/items when the working endpoint is /api/v3/items. The server returns valid JSON for the wrong shape and your consumer fails on the missing field rather than a network error. Pattern: if everything looks correct but the data is wrong, double-check the URL.

CORS preflight failures

Specific to browser clients. If a preflight OPTIONS request fails, your fetch never sees the real response — just a network error. Open DevTools Network tab and look for the OPTIONS request; if it returned 403 or 401, fix the CORS config on the server.

Debugging workflow with browser DevTools

Modern Chrome and Firefox DevTools are the fastest path to bisecting an API bug in a browser context.

  1. Open the page that's failing. Open DevTools (F12) and switch to the Network tab.
  2. Filter by XHR/Fetch to hide HTML/CSS noise. Reproduce the failure.
  3. Find the failing request. Check the Status (200? 4xx? 5xx?), Content-Type header, and Content-Length.
  4. Click into the request, switch to Response. If DevTools shows it as 'Failed to load response' but the request succeeded, it's likely an encoding or content-type mismatch.
  5. Switch to Preview to see DevTools' formatted view. If Preview shows valid JSON but your code fails to parse, the issue is in how your code reads the response (e.g., not awaiting response.json()).
  6. Use 'Copy as fetch' to get a fetch() call you can replay from the DevTools console. If the same request replayed outside the browser succeeds, the bug is browser-specific (often CORS or headers).
  7. Use 'Copy as cURL' to share with backend engineers. They can replay against their local server.

Debugging with Postman and Insomnia

Postman has a built-in JSON validator on responses and will highlight syntax errors. Insomnia is similar but lighter. Both let you save responses as files, which you can then paste into a formatter for detailed inspection.

Pro workflow: save your last 'works' response as a known-good baseline. When something breaks, paste both responses into our JSON Compare tool to see exactly what changed at the field level — not the text level. This is dramatically more useful than diff because it ignores whitespace and key order and shows you semantic differences.

CI/CD: catch JSON contract regressions before they ship

If your team owns both ends (client + API), put schema validation in CI. Every PR runs the integration tests; integration tests validate every recorded response against the schema. If the schema breaks, the build fails. This catches the class of bug where the server team changes a field name and the client team finds out in production.

If you don't own both ends, you can still record responses and validate them in a nightly cron. The first failure tells you the upstream API changed without notice — far better than learning from your own users in support tickets.

When the API itself is broken: escalation playbook

Sometimes the bug is on the server. Before escalating, prepare three things to make the conversation fast.

  1. The exact request that failed — URL, method, headers (redact auth tokens), body. 'Copy as cURL' from DevTools is perfect for this.
  2. The exact response you received — status code, all headers, full body. If the response is huge, paste it into the JSON formatter, click 'Copy input', and include the formatted version.
  3. What you expected vs what you got. Reference the schema or a previous known-good response. 'Expected the user_id field to be a string per /v2/users response, got an integer instead.'

Quick reference: common error messages

  • Unexpected token } in JSON at position N — trailing comma. Remove the comma just before position N.
  • Unexpected token ' in JSON at position N — single quote. Replace with double quote.
  • Unexpected end of JSON input — truncated. Check network for partial response or missing closing bracket.
  • Unexpected number in JSON at position N — usually a stray comma in the middle of an array. Check around position N.
  • Bad escaped character in string — invalid \ sequence. JSON only allows \" \\ \/ \b \f \n \r \t and \uXXXX.
  • Bad Unicode escape — malformed \uXXXX, often a surrogate pair issue. Check the character around the error position.