Implementations
Here is an example of how to use L-BFGS for logistic regression:
In this example, we first load the scikit-learn digits dataset (a small 8x8 handwritten-digit set, often used as a lightweight stand-in for MNIST) and split it into training and test sets.
Python3
# Import the necessary libraries
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

# Load the digits dataset
digits = load_digits()

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42)

# Create an instance of LogisticRegression
# with the 'lbfgs' solver and L2 penalty
clf = LogisticRegression(solver='lbfgs', penalty='l2', max_iter=10000)

# Fit the model to the training data
clf.fit(X_train, y_train)

# Evaluate the model on the test data
accuracy = clf.score(X_test, y_test)
print("Logistic regression Accuracy:", accuracy)
Output:
Logistic regression Accuracy: 0.9722222222222222
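As a quick sanity check on convergence (a sketch we add here, not part of the original tutorial), the fitted estimator's `n_iter_` attribute reports how many iterations the solver actually ran. A value well below `max_iter` means L-BFGS converged rather than exhausting its budget:

```python
# Refit the same model and inspect the iteration count used by L-BFGS
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42)

clf = LogisticRegression(solver='lbfgs', penalty='l2', max_iter=10000)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)

# n_iter_ is an array (one entry per optimization run); far below
# max_iter=10000 indicates the solver converged on its own.
print("Iterations used:", clf.n_iter_[0])
print("Accuracy:", acc)
```

If `n_iter_` equals `max_iter`, scikit-learn typically also emits a `ConvergenceWarning`, which is a hint to scale the features or raise the budget.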
Applying various online solvers and computing the accuracy
Python3
# Import the necessary libraries
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.linear_model import PassiveAggressiveClassifier, Perceptron
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load the digits dataset
digits = load_digits()

# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3)

# Define the solvers to compare
# (note: SGDClassifier's logistic loss is named 'log_loss';
# the old name 'log' was removed in scikit-learn 1.3)
solvers = [
    ('SAG', LogisticRegression(penalty='l2', solver='sag', max_iter=100)),
    ('SAGA', LogisticRegression(penalty='l1', solver='saga', max_iter=100)),
    ('L-BFGS', LogisticRegression(penalty='l2', solver='lbfgs', max_iter=100)),
    ('liblinear', LogisticRegression(penalty='l1', solver='liblinear', max_iter=100)),
    ('SGD', SGDClassifier(loss='log_loss', max_iter=100)),
    ('Passive-Aggressive', PassiveAggressiveClassifier(max_iter=100)),
    ('Perceptron', Perceptron(max_iter=100)),
]

# Train and evaluate each solver
for name, clf in solvers:
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    acc = accuracy_score(y_test, y_pred)
    print(f"{name} accuracy: {acc}")
Output:
SAG accuracy: 0.9648148148148148
SAGA accuracy: 0.9703703703703703
L-BFGS accuracy: 0.9592592592592593
liblinear accuracy: 0.9648148148148148
SGD accuracy: 0.9518518518518518
Passive-Aggressive accuracy: 0.9574074074074074
Perceptron accuracy: 0.937037037037037
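Accuracy alone does not tell the whole story; solvers also differ in how fast they fit. The sketch below (our addition, with hypothetical variable names such as `elapsed`) times two of the solvers above, after standardizing the features so the iterative SAGA solver converges within its iteration budget. Exact timings will vary by machine:

```python
# Compare fit time as well as accuracy for two LogisticRegression solvers
from time import perf_counter
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

# Standardize features: iterative solvers converge much faster on scaled data
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

results = {}
for name, clf in [
    ('L-BFGS', LogisticRegression(solver='lbfgs', max_iter=200)),
    ('SAGA', LogisticRegression(solver='saga', max_iter=200)),
]:
    start = perf_counter()
    clf.fit(X_train, y_train)
    elapsed = perf_counter() - start
    results[name] = clf.score(X_test, y_test)
    print(f"{name}: accuracy={results[name]:.4f}, fit time={elapsed:.3f}s")
```

On a dataset this small both solvers finish quickly, but the pattern generalizes: SAG/SAGA scale better to many samples, while L-BFGS is a strong default for dense, moderately sized problems.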