diff --git a/docs/source/guide/predictions.md b/docs/source/guide/predictions.md
index 6c1ae61b05b2..e97b0277b2bc 100644
--- a/docs/source/guide/predictions.md
+++ b/docs/source/guide/predictions.md
@@ -73,14 +73,16 @@ The `predictions` array also depends on the labeling configuration. Some pre-ann
| `result.from_name` | string | String used to reference the labeling configuration `from_name` for the type of labeling being performed. Must match the labeling configuration. |
| `result.to_name` | string | String used to reference the labeling configuration `to_name` for the type of labeling being performed. Must match the labeling configuration. |
| `result.type` | string | Specify the labeling tag for the type of labeling being performed. For example, a named entity recognition task has a type of `labels`. |
+| `result.readonly` | bool | Whether the region is in read-only mode and cannot be edited by annotators. |
+| `result.hidden` | bool | Whether the region is hidden by default (controls the eye icon for the region). |
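+
+Both flags are set on an individual entry in the `result` array. The following is a minimal Python sketch of a region that is visible but locked from editing; the tag names, label, and coordinates are illustrative and must match your labeling configuration:
+
+{% codeblock lang:python %}
+# A pre-annotated region that annotators can see but not edit.
+# from_name, to_name, and the label value are assumptions; use the names from your config.
+region = {
+    "id": "region1",
+    "type": "rectanglelabels",
+    "from_name": "label",
+    "to_name": "image",
+    "readonly": True,    # lock the region so it cannot be moved or relabeled
+    "hidden": False,     # show the region by default (eye icon enabled)
+    "value": {
+        "x": 4.98, "y": 12.82, "width": 32.52, "height": 44.91,
+        "rotation": 0,
+        "rectanglelabels": ["Airplane"]
+    }
+}
+{% endcodeblock %}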
Other types of annotation contain specific fields. You can review the [examples on this page](#Specific-examples-for-pre-annotations), or review the [tag documentation for the Object and Control tags](/tags) used in your labeling configuration for labeling-specific `result` objects. For example, the [Audio tag](tags/audio.html), [HyperText tag](tags/hypertext.html), [Paragraphs tag](tags/paragraphs.html), [KeyPointLabels tag](/tags/keypointlabels.html), and more all contain sample `result` JSON examples.
> Note: If you're generating pre-annotations for a [custom ML backend](ml_create.html), you can use the `self.parsed_label_config` variable to retrieve the labeling configuration for the project and build predictions that match it. See the [custom ML backend](ml_create.html) documentation for more details.
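+
+As a rough sketch of how that variable can be used, the example below assumes a Choices control named `choice` that applies to an `image` object tag; the class name is hypothetical, and the exact `predict()` signature and the structure of `parsed_label_config` can vary between `label-studio-ml` versions:
+
+{% codeblock lang:python %}
+from label_studio_ml.model import LabelStudioMLBase
+
+class AirplaneOrCarModel(LabelStudioMLBase):
+    def predict(self, tasks, **kwargs):
+        # parsed_label_config maps each control tag name (from_name) to its parsed
+        # definition: the tag type, the object tag(s) it labels, and the label values.
+        choice_config = self.parsed_label_config["choice"]  # "choice" is an assumed from_name
+        to_name = choice_config["to_name"][0]
+        labels = list(choice_config["labels"])
+
+        predictions = []
+        for task in tasks:
+            predictions.append({
+                "model_version": "one",
+                "score": 0.5,
+                "result": [{
+                    "from_name": "choice",
+                    "to_name": to_name,
+                    "type": "choices",
+                    # Placeholder logic: always predict the first configured label.
+                    "value": {"choices": [labels[0]]}
+                }]
+            })
+        return predictions
+{% endcodeblock %}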
-## Import pre-annotations for images
+## Import bbox and choice pre-annotations for images
-For example, import predicted labels for tasks to determine whether an item in an image is an airplane or a car.
+For example, import predicted **bounding box regions (rectangles)** and **choices** for tasks to determine whether an item in an image is an airplane or a car.
For image pre-annotations, Label Studio expects the x, y, width, and height of image annotations to be provided as percentages of the overall image dimensions. See [Units for image annotations](predictions.html#Units_for_image_annotations) on this page for more information about how to convert formats.
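+
+For example, x and width are expressed relative to the image width, and y and height relative to the image height. A minimal conversion sketch (the function and variable names are illustrative):
+
+{% codeblock lang:python %}
+def bbox_pixels_to_percent(x_px, y_px, w_px, h_px, image_width, image_height):
+    """Convert a pixel-space bounding box to percentages of the image dimensions."""
+    return {
+        "x": 100.0 * x_px / image_width,
+        "y": 100.0 * y_px / image_height,
+        "width": 100.0 * w_px / image_width,
+        "height": 100.0 * h_px / image_height
+    }
+
+# A 180 x 265 px box at (30, 77) in a 600 x 403 px image, rounded:
+# {'x': 5.0, 'y': 19.11, 'width': 30.0, 'height': 65.76}
+print(bbox_pixels_to_percent(30, 77, 180, 265, 600, 403))
+{% endcodeblock %}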
@@ -112,10 +114,12 @@ Save this example JSON as a file to import it into Label Studio, for example, `e
{% codeblock lang:json %}
[{
"data": {
- "image": "http://localhost:8080/static/samples/sample.jpg"
+ "image": "/static/samples/sample.jpg"
},
"predictions": [{
+ "model_version": "one",
+ "score": 0.5,
"result": [
{
"id": "result1",
@@ -150,8 +154,7 @@ Save this example JSON as a file to import it into Label Studio, for example, `e
"value": {
"choices": ["Airbus"]
}
- }],
- "score": 0.95
+ }]
}]
}]
{% endcodeblock %}
@@ -170,21 +173,20 @@ Import pre-annotated tasks into Label Studio [using the UI](tasks.html#Import-da
In the Label Studio UI, the imported prediction for this task looks like the following: