What exactly is the correct JSON format for Postman?

I’m trying to import a very large JSON file containing both Latin and Chinese characters through the import button. Unfortunately, a message pops up telling me the format is not recognized.

What do I need to do to this file for it to be able to fulfill Postman’s requirements?

I’ve tried another extremely small and simple file, which is just one object nested inside another, and I get the same error message.

However, one of my files does fulfill the requirements and I don’t get an error message.

The first screenshot shows the file that Postman allows and the second and third screenshots show the files that Postman rejects.

The first file looks like a Collection, which Postman recognises as a known format, so it will create that in the Workspace.

I’m not sure what the other two files are or what you’re using them for.

The second file is the only file I actually want to import. I was just testing the first and third files.

The second file is a very large collection of Chinese/English translations for a dictionary I’m trying to build.

After importing, I want to create an index that I can reference using a search engine API (MeiliSearch) so I can access the translations through a search bar.

How can I turn this file into a collection Postman recognizes?

The Import feature can only parse known formats; it’s not a general-purpose importer. It requires the file to be in a specific format, which is why you see an error message when it’s not.

The Collection format, in its most basic form, would be this:

{
	"info": {
		"name": "New Collection",
		"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
	},
	"item": []
}

That would display, once imported, an empty Collection.

As you add more Folders and Requests to the Collection, the structure will start to look more like this:

{
	"info": {
		"name": "New Collection",
		"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
	},
	"item": [
		{
			"name": "New Folder",
			"item": [
				{
					"name": "New Request",
					"request": {
						"method": "GET",
						"header": [],
						"url": {
							"raw": "https://postman-echo.com/get",
							"protocol": "https",
							"host": [
								"postman-echo",
								"com"
							],
							"path": [
								"get"
							]
						}
					},
					"response": []
				}
			]
		}
	]
}

This will increase in size and change as you add more complex requests and items, such as Collection Variables, to the Collection.
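
Collection Variables, for example, live in a top-level "variable" array alongside "info" and "item". As a minimal sketch (the key and value here are just placeholders, not anything from your file):

{
	"info": {
		"name": "New Collection",
		"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
	},
	"item": [],
	"variable": [
		{
			"key": "baseUrl",
			"value": "https://postman-echo.com"
		}
	]
}

A Request in that Collection could then reference the value as {{baseUrl}}.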

I’m not really sure what your final requests would look like, given the file that you have created, so I couldn’t give you a better answer.

This is the raw schema for the Postman Collection Format v2.1.0

Got it.

Thanks for the help.

So instead of importing that large file with all the English/Chinese translations, I should create an empty Collection first, then add more folders and requests containing my data to the Collection through the Postman interface?

Wouldn’t that take forever given that my file contains tens of thousands of translations?

I’m struggling to see a quick and simple solution here. Is there one?

It doesn’t need to be created in the Postman UI, but for Postman to recognise what the file is, it needs to be in a known format.

In its current form, Postman has no way of mapping that to anything as it does not know what you want to do with it.

I understand, but how can I make a file as large as mine, with as many lines as it has, fit into Postman’s required format?

I’ve looked for resources/tutorials online for how to do this but no luck.

What does a single request look like? How does the data in the file feed into that request?

I don’t know what your file data is used for, so I’m unable to offer any solutions.

An example GET request: {{url}}/indexes/translation/documents

An example index:

[
	{
		"uid": "translation",
		"name": "translation",
		"createdAt": "2022-01-10T03:37:45.388463500Z",
		"updatedAt": "2022-01-10T03:37:45.389154Z",
		"primaryKey": "translation_id"
	}
]

Then, ideally, each definition will be assigned a translation_id to be linked to the primary key.

The file data is to be indexed by the MeiliSearch API for a search engine.

Apologies if I’m still unable to make myself clear. I’m very new to this.
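
Given those details, one rough option is to wrap the contents of your file into the body of a single POST request inside a collection, rather than adding tens of thousands of requests by hand. This is only a sketch, not something generated from your actual file: the document fields "translation_id", "english" and "chinese" are placeholders for whatever your entries really contain, and in practice the "raw" body would hold the full array from your file.

{
	"info": {
		"name": "Dictionary import",
		"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
	},
	"item": [
		{
			"name": "Add translation documents",
			"request": {
				"method": "POST",
				"header": [
					{
						"key": "Content-Type",
						"value": "application/json"
					}
				],
				"url": {
					"raw": "{{url}}/indexes/translation/documents",
					"host": [
						"{{url}}"
					],
					"path": [
						"indexes",
						"translation",
						"documents"
					]
				},
				"body": {
					"mode": "raw",
					"raw": "[{\"translation_id\": 1, \"english\": \"hello\", \"chinese\": \"你好\"}]"
				}
			},
			"response": []
		}
	]
}

Importing that would give you a Collection with one request that sends the documents to your translation index, with {{url}} set as a Collection or Environment Variable pointing at your MeiliSearch instance.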