The error indicates that a rank 2 value (a matrix) is expected but the actual value is rank 1 (a vector). I suspect this is because np.tostring() returns a single string rather than a list of strings. I think that is somewhat tangential, though, as I don't believe your float-to-string and string-to-float conversions are consistent. You convert float-to-string using numpy's built-in tostring() method, which returns the byte representation of the data, i.e.:
import numpy as np
x = np.array([1.0, 2.0])
x.tostring()  # b'\x00\x00\x00\x00\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\x00@'
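For completeness, the conversions are only consistent if the read side decodes those bytes with the same dtype; a quick sketch of the round trip:

import numpy as np

x = np.array([1.0, 2.0])
raw = x.tostring()  # alias of x.tobytes() in newer numpy; raw float64 bytes
# string-to-float must use the matching dtype, or the bytes are
# silently reinterpreted as different values
y = np.frombuffer(raw, dtype=np.float64)
assert (x == y).all()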
If your code is running on a Google Compute Engine instance, and the instance has the correct scopes, you don't need to set any environment variables. You can confirm the scopes by looking at the instance in the Developers Console or by asking the metadata server:
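For example, from inside the instance (using the requests package here, though any HTTP client works):

import requests

# The metadata server is only reachable from within the instance.
resp = requests.get(
    "http://metadata.google.internal/computeMetadata/v1"
    "/instance/service-accounts/default/scopes",
    headers={"Metadata-Flavor": "Google"},  # requests without this header are rejected
)
print(resp.text)  # one scope URL per line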
You can look into the Audit Logs to determine who did what, where, and when. Further, you can use the Stackdriver Logging API method entries.list to retrieve audit log entries for your use case. You can also use the Activity Logs to see details such as the authorized user who made the API request.
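For example, a minimal sketch using the google-cloud-logging Python client (the project ID and filter are placeholders to adapt):

import google.cloud.logging

client = google.cloud.logging.Client(project="my-project")  # placeholder project ID
# Restrict the listing to audit log entries.
audit_filter = 'logName:"cloudaudit.googleapis.com"'
for entry in client.list_entries(filter_=audit_filter, page_size=10):
    print(entry.timestamp, entry.log_name)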
2GB is large. That's a heckuva big image. You should be able to cut that down to 100MB, perhaps using Alpine instead of Ubuntu. Copying 4GB of data is also less than ideal. Given that, I suspect the solution will be more of an architecture change than a code change.
How to connect to a "Google Cloud BigQuery" public dataset from "Google Cloud Functions"
You can use the BigQuery client library directly inside the function, e.g.:

import base64

from google.cloud import bigquery

def hello_pubsub(event, context):
    """Triggered from a message on a Cloud Pub/Sub topic.

    Args:
        event (dict): Event payload.
        context (google.cloud.functions.Context): Metadata for the event.
    """
    client = bigquery.Client(project="<your-project>")
    query_str = "SELECT * FROM `iotcoretutorial-xxxxxx.DHT11.DHT11Data` WHERE temperature > 24"
    job = client.query(
        query_str,
        # Location must match that of the dataset(s) referenced in the query.
        location="US",
    )
    # Wait for the job to finish and get the results as a dataframe.
    # This requires pandas. You can do something different with your
    # results here if you want.
    ans_df = job.to_dataframe()
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    print("Hello " + pubsub_message)
Google Cloud CDN started ignoring query strings for storage buckets
We were affected by this also. After contacting Google Support, they confirmed this is a permanent change. The recommended workaround is to either use versioning in the object name (e.g. serve style.v2.css rather than style.css?v=2, as sketched below) or use cache invalidation. The latter sounds a bit odd, as the cache invalidation documentation states:
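To sketch the object-name versioning approach with the google-cloud-storage client (the helper and the content-hash naming scheme below are just an illustration, not Google's prescribed method):

import hashlib

from google.cloud import storage

def upload_versioned(bucket_name, local_path, object_name):
    # Embed a short content hash in the object name so the name changes
    # whenever the content does; stale cached copies simply stop being
    # referenced, and no invalidation is needed.
    with open(local_path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()[:8]
    stem, dot, ext = object_name.rpartition(".")
    versioned = f"{stem}.{digest}{dot}{ext}" if dot else f"{object_name}.{digest}"
    blob = storage.Client().bucket(bucket_name).blob(versioned)
    blob.upload_from_filename(local_path)
    return versioned  # reference this name from your pages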