These days I've been trying to play around with AI a bit.

Step 1: Installing and configuring the TensorFlow 1.15 environment
    Install and configure TensorFlow (a quick install check follows this list)
    https://blog.csdn.net/weixin_40925977/article/details/107306079
    Switch conda to the TUNA mirror
    https://mirrors4.tuna.tsinghua.edu.cn/help/anaconda/
    Fix an installation problem ("Solving environment: failed with initial frozen solve. Retrying with flexible solve")
    https://blog.csdn.net/hhhhhhhhhhwwwwwwwwww/article/details/112726892
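Once the environment is set up, a quick smoke test saves debugging later. Below is a minimal check I would run inside the new conda environment; it only assumes that TensorFlow 1.15 was installed as described in the links above, and the GPU check will of course depend on the machine.

# Quick check that the new environment really provides TensorFlow 1.15
import tensorflow as tf

print(tf.__version__)              # expect something like 1.15.x
print(tf.test.is_gpu_available())  # True only with a CUDA-enabled build and a visible GPU

# Tiny graph-mode smoke test in the TF 1.x style
a = tf.constant(2)
b = tf.constant(3)
with tf.Session() as sess:
    print(sess.run(a + b))         # should print 5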
Step 2: Hands-on examples with TensorFlow
    Training and testing MNIST recognition with TensorFlow
    https://blog.csdn.net/qq_32674829/article/details/82867900
    Training and testing MNIST recognition with PyTorch (a rough PyTorch sketch follows this list)
    https://blog.csdn.net/weixin_44751294/article/details/116240084
    A short introduction to the MNIST dataset
    https://yunyaniu.blog.csdn.net/article/details/79094752
    Stanford's Street View House Numbers dataset (real-world house-number images)
    http://ufldl.stanford.edu/housenumbers/
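The PyTorch article linked above follows the same overall idea. As a rough sketch (my own placeholder path and hyperparameters, not the article's code), loading MNIST and training a single softmax layer in PyTorch looks roughly like this, assuming torch and torchvision are installed:

# Minimal PyTorch counterpart: softmax regression on MNIST
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Download MNIST to a local folder (the path is a placeholder)
train_set = datasets.MNIST("./mnist_data", train=True, download=True,
                           transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=100, shuffle=True)

model = torch.nn.Linear(784, 10)               # same W and b shapes as the TF example below
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(5):
    for images, labels in train_loader:
        logits = model(images.view(-1, 784))    # flatten the 28*28 images
        loss = F.cross_entropy(logits, labels)  # softmax plus cross entropy in one call
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print("epoch", epoch + 1, "loss", loss.item())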
Step 3: Solving a few problems
    Fixing an "import trace error"
    https://blog.csdn.net/weixin_45953051/article/details/126218041
    Errors caused by mixing up parentheses and square brackets, and how to fix them (a small illustration follows this list)
    https://blog.csdn.net/qq_41112170/article/details/124044453
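The parenthesis/bracket mix-up mentioned above is a plain Python issue rather than anything TensorFlow-specific. A tiny self-contained illustration (my own example, not taken from the linked article):

# Parentheses call a function; square brackets index a sequence or build a list.
def next_batch(size):
    return list(range(size))

batch = next_batch(3)     # correct: () calls the function and returns [0, 1, 2]
# batch = next_batch[3]   # wrong: [] tries to index the function object and raises
#                         # TypeError: 'function' object is not subscriptable

shape = [None, 784]       # [] builds the list literal that e.g. tf.placeholder expects as a shape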

Below is the result of this stage. It is simply a copy of the example from the article above with a couple of small changes (the import on the first line, and one pair of square brackets later on changed to parentheses) so that it runs through successfully.

#-*- coding: utf-8 -*-
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
mnist = input_data.read_data_sets("C:/tensorflow/MNIST_data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1

# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784]) # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10]) # 0-9 digits recognition => 10 classes

# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Construct model
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax

# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))  # reduction_indices is the TF 1.x legacy name for axis
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Fit training using batch data
            _, c = sess.run((optimizer, cost), feed_dict={x: batch_xs, y: batch_ys})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            print("Epoch:", ' %04d ' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))

    print ("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy for 3000 examples
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print ("Accuracy:", accuracy.eval({x: mnist.test.images[:3000], y: mnist.test.labels[:3000]}))

 

Step 4: Going further
    Test MNIST with the latest TensorFlow release (a rough Keras sketch follows this list)
    Test with the Stanford house-numbers dataset
    Compare how much CUDA shortens training time
    Set up the environment without Anaconda
    Install the environment on Ubuntu and compare
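For the first and third items, the sketch below shows roughly what the same MNIST experiment looks like on a current TensorFlow 2.x release with the Keras API. This is only a plan, not yet tested here, and the single softmax layer simply mirrors the TF 1.x code above.

# Sketch: MNIST softmax regression on TensorFlow 2.x (Keras API)
import tensorflow as tf

print(tf.config.list_physical_devices("GPU"))  # empty list means CPU only, useful for the CUDA comparison

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation="softmax"),  # same single softmax layer as the TF 1.x code
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, batch_size=100, epochs=25)
print(model.evaluate(x_test, y_test))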
    
Step 5: Configuring and installing the PyTorch environment
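Once that environment exists, the same kind of smoke test as in Step 1 applies. A minimal sketch (version numbers and CUDA availability depend on the machine):

# Quick check of a freshly installed PyTorch environment
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True only with a CUDA build and a visible GPU

x = torch.rand(2, 3)
print(x @ x.t())                  # tiny tensor operation as a smoke test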

Original post: http://www.cnblogs.com/shinedream/p/16898255.html
