
Commit de13d8c

vertex-mg-bot authored and copybara-github committed
Llama 4 notebook fixes
PiperOrigin-RevId: 772098038
1 parent a72b7bc commit de13d8c

1 file changed

Lines changed: 2 additions & 2 deletions

File tree

notebooks/community/model_garden/model_garden_pytorch_llama4_deployment.ipynb

@@ -1075,8 +1075,8 @@
     "\n",
     "# @markdown Next fill out some request parameters:\n",
     "\n",
-    "user_image = \"https://upload.wikimedia.org/wikipedia/commons/thumb/c/cb/The_Blue_Marble_%28remastered%29.jpg/580px-The_Blue_Marble_%28remastered%29.jpg\" # @param {type: \"string\"}\n",
-    "user_message = \"What is in the image?\" # @param {type: \"string\"}\n",
+    "user_image = \"https://images.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png\" # @param {type: \"string\"}\n",
+    "user_message = \"What is in the <|image|>?\" # @param {type: \"string\"}\n",
     "# @markdown If you encounter the issue like `ServiceUnavailable: 503 Took too long to respond when processing`, you can reduce the maximum number of output tokens, such as set `max_tokens` as 20.\n",
     "max_tokens = 50 # @param {type: \"integer\"}\n",
     "temperature = 1.0 # @param {type: \"number\"}\n",
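For context, the four form parameters touched by this commit (`user_image`, `user_message`, `max_tokens`, `temperature`) are typically assembled into a multimodal chat request. The sketch below shows one plausible way to build such a payload in the OpenAI-compatible chat-completions shape that vLLM-style endpoints accept; the payload structure and variable names here are assumptions for illustration, not the notebook's exact code.

```python
# Hypothetical sketch: assemble the notebook's form parameters into an
# OpenAI-style chat-completions payload for a multimodal Llama 4 endpoint.
# (Payload shape is an assumption; the real notebook may differ.)

user_image = "https://images.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png"
user_message = "What is in the <|image|>?"
max_tokens = 50      # lower this (e.g. 20) if the endpoint returns 503 timeouts
temperature = 1.0

payload = {
    "messages": [
        {
            "role": "user",
            # Image and text are sent as separate content parts.
            "content": [
                {"type": "image_url", "image_url": {"url": user_image}},
                {"type": "text", "text": user_message},
            ],
        }
    ],
    "max_tokens": max_tokens,
    "temperature": temperature,
}
```

This payload would then be POSTed to the deployed endpoint (e.g. with `requests` or an OpenAI-compatible client); the commit only changes the default image URL and prompt text, not the request mechanics.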
