Confusion matrix for a data frame with two predictions
Are you looking for this? (It would be better to show the expected output.)
table(df1$Observations > 0, df1$Predgreedy)

        -0.05 0.02 0.25
  FALSE     0    1    0
  TRUE      1    0    1

table(df1$Observations > 0, df1$Predlinear)

        -0.02 0.12 0.15
  FALSE     0    0    1
  TRUE      1    1    0
|
Test data predictions yield random results when making predictions from a saved model
In the second script, the use of glob creates a list of tiff files that is unordered. For this approach to work, you need an ordered list of tiff files (e.g. [00001.tif, 00002.tif, ..., 1234.tif]) that can be associated with the ordered predictions. The sorted() function can be used to do the ordering:

tiles = sorted(glob.glob(os.path.join(inws, '*.tif')))
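As a minimal sketch of why the ordering matters (inws and the prediction values are placeholders, not from the original scripts), a sorted tile list lets index i of the predictions always refer to tile i:

import os
import glob

inws = '/path/to/tiles'  # placeholder input workspace

# sorted() gives a deterministic, filename-ordered list of tiles.
tiles = sorted(glob.glob(os.path.join(inws, '*.tif')))

# Placeholder predictions, assumed to be produced in the same order as tiles.
predictions = [0] * len(tiles)

for tile, pred in zip(tiles, predictions):
    print(os.path.basename(tile), pred)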
|
How to pass bigger .csv files to Amazon SageMaker for predictions using batch transform jobs
The error looks to be coming from a gRPC client closing the connection before the server is able to respond. (There is an existing feature request at https://github.com/aws/sagemaker-tensorflow-container/issues/46 to make this timeout configurable.) You could try a few things with the SageMaker Transformer to limit the size of each individual request so that it fits within the timeout.
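As a sketch of the request-size knobs (the names and values below are illustrative, not from the original answer), you can send one record per request, cap the payload size, and split the input CSV by line so each request stays small:

from sagemaker.transformer import Transformer

transformer = Transformer(
    model_name='my-tf-model',  # placeholder model name
    instance_count=1,
    instance_type='ml.m4.xlarge',
    strategy='SingleRecord',  # one record per request instead of a multi-record batch
    max_payload=1,  # cap each request at 1 MB
    max_concurrent_transforms=1,
    output_path='s3://my-bucket/batch-output/',  # placeholder bucket
)

transformer.transform(
    data='s3://my-bucket/batch-input/big.csv',  # placeholder input CSV
    content_type='text/csv',
    split_type='Line',  # split the CSV into per-line records
)
transformer.wait()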
|
How can I preprocess input data before making predictions in SageMaker?
|
Preprocess input data before making predictions inside Amazon SageMaker
Tag: python, By: Doc Immortal
I had the same problem and finally figured out how to do it. Once you have your model_data ready, you can deploy it with the following lines:

from sagemaker.tensorflow.model import TensorFlowModel

# Package the trained model together with the inference code in source_dir.
sagemaker_model = TensorFlowModel(
    model_data='s3://path/to/model/model.tar.gz',
    role=role,
    framework_version='1.12',
    entry_point='train.py',
    source_dir='my_src',
    env={'SAGEMAKER_REQUIREMENTS': 'requirements.txt'}
)

# Spin up a real-time endpoint backed by one ml.m4.xlarge instance.
predictor = sagemaker_model.deploy(
    initial_instance_count=1,
    instance_type='ml.m4.xlarge',
    endpoint_name='resnet-tensorflow-classifier'
)
import io
import numpy as np
from PIL import Image
from keras.applications.resnet50 import preprocess_input

JPEG_CONTENT_TYPE = 'image/jpeg'

# Deserialize the Invoke request body into an object we can perform prediction on
def input_fn(request_body, content_type=JPEG_CONTENT_TYPE):
    # process an image uploaded to the endpoint
    if content_type == JPEG_CONTENT_TYPE:
        img = Image.open(io.BytesIO(request_body)).resize((300, 300))
        img_array = np.array(img)
        expanded_img_array = np.expand_dims(img_array, axis=0)
        x = preprocess_input(expanded_img_array)
        return x
    else:
        raise ValueError('Unsupported content type: {}'.format(content_type))
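A quick way to sanity-check input_fn locally before deploying it (sample.jpg is a placeholder file name, not from the original answer):

with open('sample.jpg', 'rb') as f:
    body = f.read()

batch = input_fn(body, content_type=JPEG_CONTENT_TYPE)
print(batch.shape)  # (1, 300, 300, 3) for an RGB JPEG

The requirements.txt referenced through SAGEMAKER_REQUIREMENTS pins the packages the serving container needs: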
absl-py==0.7.1
astor==0.8.0
backports.weakref==1.0.post1
enum34==1.1.6
funcsigs==1.0.2
futures==3.2.0
gast==0.2.2
grpcio==1.20.1
h5py==2.9.0
Keras==2.2.4
Keras-Applications==1.0.7
Keras-Preprocessing==1.0.9
Markdown==3.1.1
mock==3.0.5
numpy==1.16.3
Pillow==6.0.0
protobuf==3.7.1
PyYAML==5.1
scipy==1.2.1
six==1.12.0
tensorboard==1.13.1
tensorflow==1.13.1
tensorflow-estimator==1.13.0
termcolor==1.1.0
virtualenv==16.5.0
Werkzeug==0.15.4
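Once the endpoint is up, one way to send it a JPEG is the low-level runtime client; the image path below is a placeholder, and how you decode the response depends on the model's output format:

import boto3

runtime = boto3.client('sagemaker-runtime')

with open('sample.jpg', 'rb') as f:  # placeholder local image
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName='resnet-tensorflow-classifier',
    ContentType='image/jpeg',  # routed to input_fn above
    Body=payload,
)
print(response['Body'].read())  # raw prediction returned by the endpoint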
|