A library that lets users write (research/experiment) configuration as a Python dict or in JSON, read and write parameters in code via dot attribute access, and read parameter settings from the command line to override their values.

- Unlimited nesting of parameters inside dicts
- Automatic version checking
- Restricting a parameter's value to a fixed set of allowed values (enum)
- Support for the tuple type
- Loading configuration from a local JSON file
- Per-parameter help text, printed as documentation with the command-line -h flag

Update: a simple example is provided.

The following example demonstrates the convenience of this tool compared with argparse. Code required when using argparse:

parser = argparse.ArgumentParser(description='PyTorch local error training')
parser.add_argument('--model', default='vgg8b', help='model, mlp, vgg13, vgg16, vgg19, vgg8b, vgg11b, resnet18, resnet34, wresnet28-10 and more (default: vgg8b)')
parser.add_argument('--dataset', default='CIFAR10', help='dataset, MNIST, KuzushijiMNIST, FashionMNIST, CIFAR10, CIFAR100, SVHN, STL10 or ImageNet (default: CIFAR10)')
parser.add_argument('--batch-size', type=int, default=128, help='input batch size for training (default: 128)')
parser.add_argument('--num-layers', type=int, default=1, help='number of hidden fully-connected layers for mlp and vgg models (default: 1)')
parser.add_argument('--lr', type=float, default=5e-4, help='initial learning rate (default: 5e-4)')
parser.add_argument('--lr-decay-milestones', nargs='+', type=int, default=[200,300,350,375], help='decay learning rate at these milestone epochs (default: [200,300,350,375])')
parser.add_argument('--optim', default='adam', help='optimizer, adam, amsgrad or sgd (default: adam)')
parser.add_argument('--beta', type=float, default=0.99, help='fraction of similarity matching loss in predsim loss (default: 0.99)')
# the original excerpt used args.no_cuda without defining it; this flag restores it
parser.add_argument('--no-cuda', action='store_true', default=False, help='disable CUDA training')
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()
if args.cuda:
    cudnn.enabled = True
    cudnn.benchmark = True

Code required after converting to this tool:

'''
:param model: model, mlp, vgg13, vgg16, vgg19, vgg8b, vgg11b, resnet18, resnet34, wresnet28-10 and more (default: vgg8b)
:param dataset: dataset, MNIST, KuzushijiMNIST, FashionMNIST, CIFAR10, CIFAR100, SVHN, STL10 or ImageNet (default: CIFAR10)
:param batch-size: input batch size for training (default: 128)
:param num-layers: number of hidden fully-connected layers for mlp and vgg models (default: 1)
:param lr: initial learning rate (default: 5e-4)
:param lr-decay-milestones: decay learning rate at these milestone epochs (default: [200,300,350,375])
:param optim: optimizer, adam, amsgrad or sgd (default: adam)
:param beta: fraction of similarity matching loss in predsim loss (default: 0.99)
:param no-cuda: disable CUDA training (default: False)
'''
config = {
    'model': 'vgg8b',
    'dataset': 'CIFAR10',
    'batch-size': 128,
    'num-layers': 1,
    'lr': 5e-4,
    'lr-decay-milestones': [200, 300, 350, 375],
    'optim': 'adam',
    'beta': 0.99,
    'no-cuda': False,  # referenced as args.no_cuda below
}
args = Config(config, name='PyTorch local error training')
args.cuda = not args.no_cuda and torch.cuda.is_available()
if args.cuda:
    cudnn.enabled = True
    cudnn.benchmark = True

As the comparison shows, the amount of code drops and the configuration becomes more structured and tidy.
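The article does not show how Config works internally. As a rough illustration of the mechanism the feature list describes (dot attribute access over a nested dict plus command-line overrides), here is a minimal, hypothetical sketch; SimpleConfig and apply_cli are made-up names, and this is not the library's actual implementation:

import sys

class SimpleConfig:
    """Hypothetical sketch: dot access over a nested dict with CLI overrides."""

    def __init__(self, data):
        for key, value in data.items():
            if isinstance(value, dict):
                value = SimpleConfig(value)  # unlimited nesting via recursion
            # hyphenated keys such as 'batch-size' become args.batch_size
            setattr(self, key.replace('-', '_'), value)

    def apply_cli(self, argv=None):
        # consume '--key value' pairs and override the matching default
        argv = sys.argv[1:] if argv is None else argv
        for flag, raw in zip(argv[0::2], argv[1::2]):
            attr = flag.lstrip('-').replace('-', '_')
            default = getattr(self, attr)  # unknown keys raise AttributeError
            setattr(self, attr, type(default)(raw))  # naive coercion to the default's type
        return self

args = SimpleConfig({'lr': 5e-4, 'optim': 'adam', 'net': {'depth': 18}})
args.apply_cli(['--lr', '0.001', '--optim', 'sgd'])
print(args.lr, args.optim, args.net.depth)  # 0.001 sgd 18

A real implementation would need more than the naive type(default)(raw) coercion shown here (booleans, lists, enum validation), plus the JSON loading, version checking, and -h help generation described in the feature list above.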
