- Installation
  - Install with pip
  - Install with Conda
  - Install from Source
  - Editable Install
  - Install PyTorch with CUDA support
- Quickstart
  - Sentence Transformer
  - Cross Encoder
  - Sparse Encoder
  - Next Steps
- Migration Guide
  - Migrating from v4.x to v5.x
    - Migration for model.encode
    - Migration for Asym to Router
    - Migration of advanced usage
  - Migrating from v3.x to v4.x
    - Migration for CrossEncoder evaluators
  - Migrating from v2.x to v3.x
- Usage
  - Computing Embeddings
    - Initializing a Sentence Transformer Model
    - Calculating Embeddings
    - Prompt Templates
    - Input Sequence Length
    - Multi-Process / Multi-GPU Encoding
  - Semantic Textual Similarity
    - Similarity Calculation
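The default similarity calculation is cosine similarity: the dot product of the two vectors divided by the product of their norms. A minimal numpy sketch with toy 2-d vectors standing in for sentence embeddings:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: dot product of the two vectors as unit vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for sentence embeddings.
u = np.array([1.0, 1.0])
v = np.array([1.0, 0.0])
w = np.array([-1.0, 0.0])

print(cosine_similarity(u, v))  # ≈ 0.7071 (similar direction)
print(cosine_similarity(v, w))  # -1.0 (opposite direction)
```

Scores range from -1 to 1, and only the direction of the vectors matters, not their length.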
- Semantic Search
  - Background
  - Symmetric vs. Asymmetric Semantic Search
  - Manual Implementation
  - Optimized Implementation
  - Speed Optimization
  - Elasticsearch
  - OpenSearch
  - Approximate Nearest Neighbor
  - Retrieve & Re-Rank
  - Examples
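A manual implementation of semantic search amounts to scoring one query embedding against a corpus matrix and taking the top-k rows. A numpy sketch with toy 2-d vectors (in practice the vectors come from a bi-encoder and the corpus matrix is precomputed):

```python
import numpy as np

def semantic_search(query: np.ndarray, corpus: np.ndarray, top_k: int = 2):
    """Return (index, score) pairs for the top_k most similar corpus rows."""
    # Normalize so that the dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q
    top = np.argsort(-scores)[:top_k]
    return [(int(i), float(scores[i])) for i in top]

corpus = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
query = np.array([1.0, 0.05])
print(semantic_search(query, corpus))  # row 0 scores highest, then row 1
```

This exhaustive scan is exact; the Approximate Nearest Neighbor section covers trading a little recall for much better speed on large corpora.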
- Retrieve & Re-Rank
  - Retrieve & Re-Rank Pipeline
  - Retrieval: Bi-Encoder
  - Re-Ranker: Cross-Encoder
  - Example Scripts
  - Pre-trained Bi-Encoders (Retrieval)
  - Pre-trained Cross-Encoders (Re-Ranker)
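The pipeline idea is cheap scoring over the whole corpus, then expensive re-scoring of the shortlist. A toy sketch where both scorers are deliberately trivial stand-ins (a real pipeline would use a bi-encoder for `retrieve` and a cross-encoder for `rerank`):

```python
corpus = ["apples are fruit", "cars need fuel", "oranges are citrus fruit"]

def retrieve(query: str, docs: list[str], top_k: int) -> list[int]:
    # Stand-in retriever: score every doc by word overlap (a bi-encoder in practice).
    q = set(query.split())
    scores = [len(q & set(d.split())) for d in docs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])[:top_k]

def rerank(query: str, docs: list[str], candidates: list[int]) -> list[int]:
    # Stand-in re-ranker: re-score only the shortlist (a cross-encoder in practice).
    q = set(query.split())
    def score(i: int) -> float:
        words = docs[i].split()
        return len(q & set(words)) / len(words)
    return sorted(candidates, key=lambda i: -score(i))

candidates = retrieve("are fruit", corpus, top_k=2)
print(rerank("are fruit", corpus, candidates))
```

The structure is the point: the re-ranker only ever sees `top_k` candidates, so its higher per-pair cost stays affordable.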
- Clustering
  - k-Means
  - Agglomerative Clustering
  - Fast Clustering
  - Topic Modeling
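k-means over embeddings can be sketched without any library (the documentation itself uses scikit-learn and `util.community_detection`; this toy version only shows the assign/update loop):

```python
import numpy as np

def kmeans(points: np.ndarray, k: int, iters: int = 10, seed: int = 0) -> np.ndarray:
    """Plain k-means: returns one cluster label per point."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

# Two obvious groups of toy "embeddings".
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = kmeans(pts, k=2)
print(labels)  # points 0 and 1 share one label; points 2 and 3 the other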
- Paraphrase Mining
- Translated Sentence Mining
  - Margin Based Mining
  - Examples
- Image Search
  - Installation
  - Usage
  - Examples
- Embedding Quantization
  - Binary Quantization
  - Scalar (int8) Quantization
  - Additional extensions
  - Demo
  - Try it yourself
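Both quantization schemes can be sketched on toy data: binary quantization keeps only the sign of each dimension (1 bit instead of 32), while scalar int8 quantization maps each dimension's observed range onto [-128, 127]. The library ships a calibrated `quantize_embeddings` helper for this; the sketch below only illustrates the arithmetic:

```python
import numpy as np

rng = np.random.default_rng(42)
embeddings = rng.normal(size=(4, 8)).astype(np.float32)  # toy float32 embeddings

# Binary quantization: keep only the sign of each dimension.
binary = (embeddings > 0).astype(np.uint8)

# Scalar int8 quantization: map each dimension's observed range onto [-128, 127].
lo = embeddings.min(axis=0)
hi = embeddings.max(axis=0)
scale = (hi - lo) / 255.0
int8 = np.round((embeddings - lo) / scale - 128).astype(np.int8)

print(binary.shape, int8.dtype)
```

In practice the min/max calibration ranges come from a large calibration set rather than from the vectors being quantized.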
- Creating Custom Models
  - Structure of Sentence Transformer Models
  - Sentence Transformer Model from a Transformers Model
  - Advanced: Custom Modules
- Evaluation with MTEB
  - Installation
  - Evaluation
  - Additional Arguments
  - Results Handling
  - Leaderboard Submission
- Speeding up Inference
  - PyTorch
  - ONNX
  - OpenVINO
  - Benchmarks
- Pretrained Models
  - Original Models
  - Semantic Search Models
  - Multi-QA Models
  - MSMARCO Passage Models
  - Multilingual Models
  - Semantic Similarity Models
  - Bitext Mining
  - Image & Text-Models
  - INSTRUCTOR models
  - Scientific Similarity Models
- Training Overview
  - Why Finetune?
  - Training Components
    - Model
    - Dataset
    - Dataset Format
    - Loss Function
    - Training Arguments
    - Evaluator
    - Trainer
    - Callbacks
  - Multi-Dataset Training
  - Deprecated Training
  - Best Base Embedding Models
  - Comparisons with CrossEncoder Training
- Dataset Overview
  - Datasets on the Hugging Face Hub
  - Pre-existing Datasets
- Loss Overview
  - Loss Table
  - Loss modifiers
  - Distillation
  - Commonly used Loss Functions
  - Custom Loss Functions
- Training Examples
  - Semantic Textual Similarity
    - Training data
    - Loss Function
  - Natural Language Inference
    - Data
    - SoftmaxLoss
    - MultipleNegativesRankingLoss
  - Paraphrase Data
    - Pre-Trained Models
  - Quora Duplicate Questions
    - Training
    - MultipleNegativesRankingLoss
    - Pretrained Models
  - MS MARCO
    - Bi-Encoder
  - Matryoshka Embeddings
    - Use Cases
    - Results
    - Training
    - Inference
    - Code Examples
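Matryoshka inference is just truncation: keep the first k dimensions of each embedding and re-normalize before computing similarity. A numpy sketch with toy vectors (real Matryoshka models are trained so that the leading dimensions carry most of the signal; with random vectors the truncation is lossy):

```python
import numpy as np

def truncate(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` dimensions and re-normalize each row."""
    cut = embeddings[:, :dim]
    return cut / np.linalg.norm(cut, axis=1, keepdims=True)

rng = np.random.default_rng(0)
full = rng.normal(size=(3, 8))   # toy "full-size" embeddings
small = truncate(full, dim=4)    # half the storage per vector
print(small.shape)               # (3, 4)
```

Downstream code (cosine similarity, vector databases) works unchanged on the truncated vectors, at half, quarter, etc. of the storage and compute.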
  - Adaptive Layers
    - Use Cases
    - Results
    - Training
    - Inference
    - Code Examples
  - Multilingual Models
    - Extend your own models
    - Training
    - Datasets
    - Sources for Training Data
    - Evaluation
    - Available Pre-trained Models
    - Usage
    - Performance
    - Citation
  - Model Distillation
    - Knowledge Distillation
    - Speed - Performance Trade-Off
    - Dimensionality Reduction
    - Quantization
  - Augmented SBERT
    - Motivation
    - Extend to your own datasets
    - Methodology
    - Scenario 1: Limited or small annotated datasets (few labeled sentence-pairs)
    - Scenario 2: No annotated datasets (Only unlabeled sentence-pairs)
    - Training
    - Citation
  - Training with Prompts
    - What are Prompts?
    - Why would we train with Prompts?
    - How do we train with Prompts?
  - Training with PEFT Adapters
    - Compatibility Methods
    - Adding a New Adapter
    - Loading a Pretrained Adapter
    - Training Script
  - Unsupervised Learning
    - TSDAE
    - SimCSE
    - CT
    - CT (In-Batch Negative Sampling)
    - Masked Language Model (MLM)
    - GenQ
    - GPL
    - Performance Comparison
  - Domain Adaptation
    - Domain Adaptation vs. Unsupervised Learning
    - Adaptive Pre-Training
    - GPL: Generative Pseudo-Labeling
  - Hyperparameter Optimization
    - HPO Components
    - Putting It All Together
    - Example Scripts
  - Distributed Training
    - Comparison
    - FSDP
    - Usage
- Cross-Encoder vs Bi-Encoder
  - Cross-Encoder vs. Bi-Encoder
  - When to use Cross- / Bi-Encoders?
  - Cross-Encoders Usage
  - Combining Bi- and Cross-Encoders
  - Training Cross-Encoders
- Speeding up Inference
  - PyTorch
  - ONNX
  - OpenVINO
  - Benchmarks
- Pretrained Models
  - MS MARCO
  - SQuAD (QNLI)
  - STSbenchmark
  - Quora Duplicate Questions
  - NLI
  - Community Models
- Training Overview
  - Why Finetune?
  - Training Components
    - Model
    - Dataset
    - Dataset Format
    - Hard Negatives Mining
    - Loss Function
    - Training Arguments
    - Evaluator
    - Trainer
    - Callbacks
  - Multi-Dataset Training
  - Training Tips
  - Deprecated Training
  - Comparisons with SentenceTransformer Training
- Loss Overview
  - Loss Table
  - Distillation
  - Commonly used Loss Functions
  - Custom Loss Functions
- Training Examples
  - Semantic Textual Similarity
    - Training data
    - Loss Function
    - Inference
  - Natural Language Inference
    - Data
    - CrossEntropyLoss
    - Inference
  - Quora Duplicate Questions
    - Training
    - Inference
  - MS MARCO
    - Cross Encoder
    - Training Scripts
    - Inference
  - Rerankers
    - BinaryCrossEntropyLoss
    - CachedMultipleNegativesRankingLoss
    - Inference
  - Model Distillation
    - Cross Encoder Knowledge Distillation
    - Inference
- Usage
  - Computing Sparse Embeddings
    - Initializing a Sparse Encoder Model
    - Calculating Embeddings
    - Input Sequence Length
    - Controlling Sparsity
    - Interpretability with SPLADE Models
    - Multi-Process / Multi-GPU Encoding
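Sparse embeddings are mostly zeros, so similarity reduces to a dot product over the few dimensions that are active in both vectors. A toy sketch with dict-based vectors standing in for SPLADE outputs (keys are vocabulary indices, values are weights; the real encoder returns sparse tensors):

```python
# Toy sparse vectors: {vocabulary index: weight}, standing in for SPLADE output.
doc = {101: 1.2, 2054: 0.8, 3000: 0.3}
query = {101: 0.9, 3000: 0.5, 4000: 0.1}

def sparse_dot(a: dict[int, float], b: dict[int, float]) -> float:
    """Dot product over the dimensions active in both vectors."""
    if len(b) < len(a):
        a, b = b, a  # iterate over the smaller vector
    return sum(w * b[i] for i, w in a.items() if i in b)

print(sparse_dot(doc, query))  # 1.2*0.9 + 0.3*0.5 = 1.23
```

This is why sparse vectors pair naturally with inverted indexes: only overlapping active dimensions ever need to be touched.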
  - Semantic Textual Similarity
    - Similarity Calculation
  - Semantic Search
    - Manual Search
    - Vector Database Search
    - Qdrant Integration
    - OpenSearch Integration
    - Elasticsearch Integration
    - Seismic Integration
    - SPLADE-index Integration
  - Retrieve & Re-Rank
    - Overview
    - Interactive Demo: Simple Wikipedia Search
    - Comprehensive Evaluation: Hybrid Search Pipeline
    - Pre-trained Models
- Sparse Encoder Evaluation
  - Example with Retrieval Evaluation
- Speeding up Inference
  - PyTorch
  - ONNX
  - OpenVINO
  - Benchmarks
- Pretrained Models
  - Core SPLADE Models
  - Inference-Free SPLADE Models
  - Model Collections
- Training Overview
  - Why Finetune?
  - Training Components
    - Model
    - Dataset
    - Dataset Format
    - Loss Function
    - Training Arguments
    - Evaluator
    - Trainer
    - Callbacks
  - Multi-Dataset Training
  - Training Tips
- Loss Overview
  - Sparse specific Loss Functions
    - SPLADE Loss
    - CSR Loss
  - Loss Table
  - Distillation
  - Commonly used Loss Functions
  - Custom Loss Functions
- Training Examples
  - Model Distillation
    - MarginMSE
  - MS MARCO
    - SparseMultipleNegativesRankingLoss
  - Semantic Textual Similarity
    - Training data
    - Loss Function
  - Natural Language Inference
    - Data
    - SpladeLoss
  - Quora Duplicate Questions
    - Training
  - Information Retrieval
    - SparseMultipleNegativesRankingLoss (MNRL)
    - Inference & Evaluation
- Sentence Transformer
  - SentenceTransformer
    - SentenceTransformer
    - SentenceTransformerModelCardData
    - SimilarityFunction
  - Trainer
    - SentenceTransformerTrainer
  - Training Arguments
    - SentenceTransformerTrainingArguments
  - Losses
    - BatchAllTripletLoss
    - BatchHardSoftMarginTripletLoss
    - BatchHardTripletLoss
    - BatchSemiHardTripletLoss
    - ContrastiveLoss
    - OnlineContrastiveLoss
    - ContrastiveTensionLoss
    - ContrastiveTensionLossInBatchNegatives
    - CoSENTLoss
    - AnglELoss
    - CosineSimilarityLoss
    - DenoisingAutoEncoderLoss
    - GISTEmbedLoss
    - CachedGISTEmbedLoss
    - MSELoss
    - MarginMSELoss
    - MatryoshkaLoss
    - Matryoshka2dLoss
    - AdaptiveLayerLoss
    - MegaBatchMarginLoss
    - MultipleNegativesRankingLoss
    - CachedMultipleNegativesRankingLoss
    - MultipleNegativesSymmetricRankingLoss
    - CachedMultipleNegativesSymmetricRankingLoss
    - SoftmaxLoss
    - TripletLoss
    - DistillKLDivLoss
  - Samplers
    - BatchSamplers
    - MultiDatasetBatchSamplers
  - Evaluation
    - BinaryClassificationEvaluator
    - EmbeddingSimilarityEvaluator
    - InformationRetrievalEvaluator
    - NanoBEIREvaluator
    - MSEEvaluator
    - ParaphraseMiningEvaluator
    - RerankingEvaluator
    - SentenceEvaluator
    - SequentialEvaluator
    - TranslationEvaluator
    - TripletEvaluator
  - Datasets
    - ParallelSentencesDataset
    - SentenceLabelDataset
    - DenoisingAutoEncoderDataset
    - NoDuplicatesDataLoader
  - Modules
    - Main Modules
    - Further Modules
    - Base Modules
  - quantization
- Cross Encoder
  - CrossEncoder
    - CrossEncoder
    - CrossEncoderModelCardData
  - Trainer
    - CrossEncoderTrainer
  - Training Arguments
    - CrossEncoderTrainingArguments
  - Losses
    - BinaryCrossEntropyLoss
    - CrossEntropyLoss
    - LambdaLoss
    - ListMLELoss
    - PListMLELoss
    - ListNetLoss
    - MultipleNegativesRankingLoss
    - CachedMultipleNegativesRankingLoss
    - MSELoss
    - MarginMSELoss
    - RankNetLoss
  - Evaluation
    - CrossEncoderRerankingEvaluator
    - CrossEncoderNanoBEIREvaluator
    - CrossEncoderClassificationEvaluator
    - CrossEncoderCorrelationEvaluator
- Sparse Encoder
  - SparseEncoder
    - SparseEncoder
    - SparseEncoderModelCardData
    - SimilarityFunction
  - Trainer
    - SparseEncoderTrainer
  - Training Arguments
    - SparseEncoderTrainingArguments
  - Losses
    - SpladeLoss
    - FlopsLoss
    - CSRLoss
    - CSRReconstructionLoss
    - SparseMultipleNegativesRankingLoss
    - SparseMarginMSELoss
    - SparseDistillKLDivLoss
    - SparseTripletLoss
    - SparseCosineSimilarityLoss
    - SparseCoSENTLoss
    - SparseAnglELoss
    - SparseMSELoss
  - Evaluation
    - SparseInformationRetrievalEvaluator
    - SparseNanoBEIREvaluator
    - SparseEmbeddingSimilarityEvaluator
    - SparseBinaryClassificationEvaluator
    - SparseTripletEvaluator
    - SparseRerankingEvaluator
    - SparseTranslationEvaluator
    - SparseMSEEvaluator
    - ReciprocalRankFusionEvaluator
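Reciprocal rank fusion, the scheme behind the ReciprocalRankFusionEvaluator's combination of dense and sparse rankings, scores each document by the sum of 1/(k + rank) over the input rankings. A small sketch with the conventional k = 60:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists: each doc scores the sum of 1/(k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["d1", "d2", "d3"]   # e.g. a dense retriever's ranking
sparse = ["d3", "d1", "d4"]  # e.g. a sparse retriever's ranking
print(reciprocal_rank_fusion([dense, sparse]))  # "d1" wins: high in both lists
```

Only ranks are used, never raw scores, so the two retrievers' score scales never need to be calibrated against each other.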
  - Modules
    - SPLADE Pooling
    - MLM Transformer
    - SparseAutoEncoder
    - SparseStaticEmbedding
  - Callbacks
    - SpladeRegularizerWeightSchedulerCallback
  - Search Engines
- util
  - Helper Functions
  - Model Optimization
External links
- UKP Lab
- 🤗 Hugging Face
- Massive Text Embeddings Benchmark (MTEB) leaderboard
- Sentence Transformers repository

References
- Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
- Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation
- Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks