[pull] master from deepchem:master #2

Open · wants to merge 2,536 commits into master
Changes shown from 1 commit
cb1b49a
docs: ProgressiveMultitaskModel
gauthamk02 Feb 9, 2024
67ba02f
progressivemultitask add missing decorator to test
gauthamk02 Feb 9, 2024
d752386
deprecate n_outputs and fix docs
gauthamk02 Feb 28, 2024
7f4ba07
Merge pull request #3817 from GreatRSingh/dft-dev-3
rbharath Mar 5, 2024
1c52e98
Merge pull request #3845 from aaronrockmenezes/cell-counting-tut
rbharath Mar 5, 2024
d84bc6e
Merge pull request #3846 from arunppsg/docs
rbharath Mar 5, 2024
c6a33c2
remove invalid line in hf-models (#3866)
arunppsg Mar 5, 2024
3ae2d82
base class rootfinder
sudo-rsingh Feb 11, 2024
e2d20e4
broyedn1
sudo-rsingh Feb 12, 2024
3731d1a
Added extra note for split-ratio of the dataset.
karannb Mar 6, 2024
27e8b98
Fix failing torch tests due to missing `pydantic` package
shreyasvinaya Mar 6, 2024
d95bfa4
Merge branch 'master' of https://github.com/deepchem/deepchem into ci…
shreyasvinaya Mar 6, 2024
0f3546d
remove meeko repo, fix two letter atoms support, improve tests
JoseAntonioSiguenza Mar 6, 2024
3373192
fix yapf
JoseAntonioSiguenza Mar 6, 2024
24b087b
rootfinder docs [skip ci]
sudo-rsingh Mar 6, 2024
1b193bf
Update Working_With_Splitters.ipynb
Dragonwagon18 Mar 6, 2024
a51f670
Merge pull request #3883 from shreyasvinaya/ci-fixes
rbharath Mar 6, 2024
5d2a2a6
update tests
JoseAntonioSiguenza Mar 6, 2024
4982880
lower exhaustiveness in pose generation test
JoseAntonioSiguenza Mar 6, 2024
0ed0da1
fix yapf
JoseAntonioSiguenza Mar 6, 2024
dc29c73
assert lower score to avoid flakiness failures
JoseAntonioSiguenza Mar 7, 2024
6aecde4
Merge pull request #3881 from JoseAntonioSiguenza/updates_pose_genera…
rbharath Mar 7, 2024
019bb1f
typo fixes
Dragonwagon18 Mar 7, 2024
ef362b4
Merge branch 'master' of https://github.com/deepchem/deepchem into ci…
shreyasvinaya Mar 7, 2024
c5936f9
add flaky to tests that fail sometimes
shreyasvinaya Mar 7, 2024
47e25bb
add `test_weave_singletask_classification_overfit` to flaky
shreyasvinaya Mar 7, 2024
24272a9
fix flaky for `test_GNN_context_pred`
shreyasvinaya Mar 7, 2024
f5f0cb9
fix sparse matrix depracation warning
shreyasvinaya Mar 7, 2024
ad75d2e
fix `pkg_resources` deprecation warning
shreyasvinaya Mar 7, 2024
20d3143
matminer version fixes for python 3.8, 3.8+
shreyasvinaya Mar 7, 2024
6b64dbb
making corresponding changes to unittests
shreyasvinaya Mar 7, 2024
6e3f310
bump up torch version to 2.2.1
shreyasvinaya Mar 7, 2024
c059dad
bump up torch version to 2.2.1
shreyasvinaya Mar 7, 2024
cfb895d
bump up torch version to 2.2.1
shreyasvinaya Mar 7, 2024
e743e64
fix matminer versions
shreyasvinaya Mar 7, 2024
9dc5bfb
formatting and initialisation
sudo-rsingh Mar 7, 2024
0345f7e
Merge pull request #3816 from shreyasvinaya/Molgan-torch
rbharath Mar 7, 2024
37abfd1
added broyden1 docs and all tests
sudo-rsingh Mar 8, 2024
47cb1f0
Fix windows Build script
shreyasvinaya Mar 8, 2024
0c8db87
fix build
shreyasvinaya Mar 8, 2024
d9f97bd
bump up cuda version
shreyasvinaya Mar 8, 2024
6c4fe83
bump up torch version for pyg for mac
shreyasvinaya Mar 8, 2024
7bf9cf3
add dglgo as suggested on dgl website
shreyasvinaya Mar 8, 2024
6bbd046
dgl fix for windows
shreyasvinaya Mar 8, 2024
8a84bae
revert dglgo
shreyasvinaya Mar 8, 2024
61ef637
formatting fix
sudo-rsingh Mar 8, 2024
2722a22
Doctest Fix for SAMLoader, BAMLoader, CRAMLoader
KitVB Mar 8, 2024
ba351d2
formatting
sudo-rsingh Mar 8, 2024
3a1eabe
remove TF from windows build script testing
shreyasvinaya Mar 9, 2024
3fefe9d
fix mxmnet error
shreyasvinaya Mar 11, 2024
4c39f0f
Merge pull request #3890 from KitVB/docfix
rbharath Mar 11, 2024
80af2a8
adding tensorflow pin as tf 2.16 introduces keras 3
shreyasvinaya Mar 11, 2024
f52737c
Merge branch 'ci-fixes' of https://github.com/shreyasvinaya/deepchem …
shreyasvinaya Mar 11, 2024
19f5d1d
Merge branch 'master' of https://github.com/deepchem/deepchem into ci…
shreyasvinaya Mar 11, 2024
d244967
addint TF pin to docs to fix issue
shreyasvinaya Mar 11, 2024
124524f
Merge pull request #3822 from gauthamk02/port-progressive-multitask-m…
rbharath Mar 11, 2024
e578c3a
Merge pull request #3837 from Dragonwagon18/bibhu-contrib
rbharath Mar 11, 2024
6e4182d
Merge pull request #3876 from GreatRSingh/dft-dev-4
rbharath Mar 11, 2024
17dca60
Merge branch 'deepchem:master' into ci-fixes
shreyasvinaya Mar 12, 2024
88d9530
fix title underlines
shreyasvinaya Mar 12, 2024
6f5fcc9
docsfixes
shreyasvinaya Mar 12, 2024
c947e6a
adding pydantic to docs build as it was missing
shreyasvinaya Mar 12, 2024
a559198
add missing fastqloader to loaders
shreyasvinaya Mar 12, 2024
cc21683
Merge pull request #3885 from shreyasvinaya/ci-fixes
rbharath Mar 12, 2024
c5b812e
patch fix for `tfp`, pinning it to 0.23 as 0.24 uses tf 2.16 which us…
shreyasvinaya Mar 13, 2024
f927b62
Merge branch 'master' into patch-10
shreyasvinaya Mar 13, 2024
04f5eec
remove pytest pin as flaky pushed a patch
shreyasvinaya Mar 15, 2024
60912f6
remove android from CI runners to prevent out of space errors
shreyasvinaya Mar 15, 2024
eef32b0
Merge pull request #3832 from karannb/update_tutorials
rbharath Mar 18, 2024
fa65d4c
Merge branch 'deepchem:master' into ci-fixes
shreyasvinaya Mar 18, 2024
f4e2028
update docker build push action version
shreyasvinaya Mar 19, 2024
798b5dc
bump up docker base img versions
shreyasvinaya Mar 19, 2024
eacba6f
bump up cuda to 11.8
shreyasvinaya Mar 19, 2024
ae72a37
bump up setup-python version to resolve deprecation warning
shreyasvinaya Mar 19, 2024
609d515
bump up copyright year
shreyasvinaya Mar 19, 2024
36f90e4
bump up python versions in intallation doc
shreyasvinaya Mar 19, 2024
7f19765
bump up python versions in CI doc
shreyasvinaya Mar 19, 2024
bb7a973
a lot of docfixes
shreyasvinaya Mar 19, 2024
8717336
flake8 fixes
shreyasvinaya Mar 19, 2024
095d642
fix yapf
shreyasvinaya Mar 19, 2024
e3708eb
Added TextCNN wrapper TorchModel
Shiva-sankaran Mar 2, 2024
9ae5870
fixed code formatting
Shiva-sankaran Mar 3, 2024
fbccec3
Updated docs
Shiva-sankaran Mar 5, 2024
4561dfe
Added missing docs and type annotations to support functions
Shiva-sankaran Mar 10, 2024
8bd62b0
reverted replacing the old TF model doc with the new torch model
Shiva-sankaran Mar 11, 2024
4e3d5d2
Added TextCNN wrapper TorchModel
Shiva-sankaran Mar 2, 2024
d03044e
fixed code formatting
Shiva-sankaran Mar 3, 2024
1b206e9
Updated docs
Shiva-sankaran Mar 5, 2024
0592c07
Added missing docs and type annotations to support functions
Shiva-sankaran Mar 10, 2024
a4e7538
reverted replacing the old TF model doc with the new torch model
Shiva-sankaran Mar 11, 2024
5761ddc
Added both Keras/PyTorch implementation to docs
Shiva-sankaran Mar 12, 2024
3eccafe
Merge pull request #3896 from shreyasvinaya/ci-fixes
rbharath Mar 19, 2024
87ea675
fix docker build v2
shreyasvinaya Mar 20, 2024
ba6ba9b
update docker python versions
shreyasvinaya Mar 20, 2024
85ff2b3
Merge pull request #3877 from Shiva-sankaran/TextCNN_torch
rbharath Mar 21, 2024
3e6457c
Merge pull request #3907 from shreyasvinaya/ci-fixes
rbharath Mar 22, 2024
0a1cfc3
Merge pull request #3887 from shreyasvinaya/patch-10
rbharath Mar 22, 2024
794d510
Bump torch cuda version to 11.8
shreyasvinaya Mar 24, 2024
1d4dbc3
Merge pull request #3917 from shreyasvinaya/ci-fixes
rbharath Mar 26, 2024
0a858d0
bump up deepchem version to 2.8.0
shreyasvinaya Apr 1, 2024
8815a8d
fix storage space
shreyasvinaya Apr 1, 2024
88ff3cb
bump up github action
shreyasvinaya Apr 1, 2024
107474a
fix inconsistencies
shreyasvinaya Apr 1, 2024
e2c21a7
fix doctest seeding
shreyasvinaya Apr 1, 2024
92bd584
doctest patch for examples
shreyasvinaya Apr 1, 2024
5f6dcd0
fix table in getting started/requirements
shreyasvinaya Apr 1, 2024
0b42ffd
bump up example python versions
shreyasvinaya Apr 1, 2024
efa85ff
Merge pull request #3936 from shreyasvinaya/v2.8.0
rbharath Apr 2, 2024
7e29bcd
storage fix + bump up actions version
shreyasvinaya Apr 2, 2024
dba49fb
bump up base version of python for docker release
shreyasvinaya Apr 2, 2024
d5b2939
Merge pull request #3937 from shreyasvinaya/v2.8.0
rbharath Apr 2, 2024
4f3367e
Add 2.8.1 dev tag for development (#3939)
shreyasvinaya Apr 5, 2024
0bcb12b
ScScore Porting (#3692)
aaronrockmenezes Apr 8, 2024
7dbd6ea
Adding loss classes for A2C (#3938)
NimishaDey Apr 8, 2024
ffe6d6b
Adding A2C class (#3944)
NimishaDey Apr 10, 2024
d1136ff
Remove tutorial number from NormalizingFlow tutorial (#3948)
rida151 Apr 14, 2024
4464190
Fix Tutorial-Introduction to MoleculeNet (#3902)
rida151 Apr 14, 2024
352f8ef
Dft Part 5 (#3895)
sudo-rsingh Apr 19, 2024
13b7099
Addition of U-Net Model (#3919)
aaronrockmenezes Apr 19, 2024
2172557
adding polymer tutorial (#3930)
TRY-ER Apr 24, 2024
8d3a6fa
Adding PPO class (#3954)
NimishaDey Apr 24, 2024
20bf4f1
fix#3746 fit_generator not accepting variables as generaators (#3950)
gauthamk02 Apr 24, 2024
c448312
DFT Part - 6 (#3961)
sudo-rsingh Apr 24, 2024
28195eb
Flows - Affine and Masked Affine (#3949)
shreyasvinaya Apr 24, 2024
867eece
Pass loss in model training callbacks (#3963)
arunppsg Apr 30, 2024
9f70cfa
Add ConsScaleLayer, MLP for flows, Clamp exp (#3964)
shreyasvinaya May 1, 2024
def0649
add torch-cluster to env (#3967)
shreyasvinaya May 7, 2024
211c4ef
test for callbacks (#3969)
arunppsg May 15, 2024
67e662b
Differentiation Infrastructure in Deepchem Tutorial (#3912)
sudo-rsingh May 17, 2024
996bde3
Porting GraphConv layer to PyTorch (#3960)
NimishaDey May 17, 2024
d1f90d2
CI fixes:mypy (#3979)
NimishaDey May 24, 2024
d4cc476
Porting GraphPool layer to PyTorch (#3976)
NimishaDey May 29, 2024
b348694
Adding RL tutorial (#3968)
NimishaDey May 29, 2024
3070da4
Fixing mypy errors (#3988)
NimishaDey Jun 3, 2024
04f4e47
DFT PR - 7 (#3974)
sudo-rsingh Jun 3, 2024
dccc7ef
Deepchem website rebuild trigger (#3981)
Cannon07 Jun 3, 2024
0508ce7
add: basic implementation of torch.compile (#3987)
gauthamk02 Jun 5, 2024
fae2311
Made all changes. (#3997)
karannb Jun 5, 2024
4661f91
Fix Linting Test (#4005)
shreyasvinaya Jun 7, 2024
ebb7824
Add OpenBlas for dqclibs building (#3996)
sudo-rsingh Jun 7, 2024
d3491a0
pin numpy to version<2 to prevent rdkit errors (#4013)
shreyasvinaya Jun 17, 2024
fe3b2f5
Adding ProtBERT (#3985)
Shiva-sankaran Jun 19, 2024
8624ddc
rk (#4008)
sudo-rsingh Jun 21, 2024
b040742
fixes (#4019)
sudo-rsingh Jun 21, 2024
a4fd285
Protac tutorial (#4000)
david-zhang03 Jun 21, 2024
b6754ad
add: remaining modes to torch.compile (#4006)
gauthamk02 Jun 21, 2024
78123c9
Add normalizingflow model (#3998)
shreyasvinaya Jun 24, 2024
df85e70
Porting Graph Gather Layer to PyTorch (#3990)
NimishaDey Jun 24, 2024
03ad0f0
fixing unit tests and doc test imports (#4023)
TRY-ER Jun 24, 2024
99ccdc3
Made changes to image transformer. (#3975)
aaronrockmenezes Jun 24, 2024
0355e3d
add: torch compile tutorial (#4031)
gauthamk02 Jul 1, 2024
c738d88
DeepVariant 1 (generating pileups with pysam) (#4027)
KitVB Jul 1, 2024
aade4ce
minor doc fix (#4028)
harishwar017 Jul 3, 2024
7b7c19c
Porting GraphConvTorchModel to PyTorch (#4033)
NimishaDey Jul 3, 2024
140ca15
split commit for base abstract polymer featurizer (#4016)
TRY-ER Jul 3, 2024
b494d74
Intro bindingsites (#4022)
elisagdelope Jul 3, 2024
5478e9f
Fixes doctest errors (#4042)
Shiva-sankaran Jul 5, 2024
d48831a
Updated Antibody Language Model Tutorial (#4044)
dhuvik Jul 5, 2024
c426e77
Smiles2vec Model Porting (nn.module) (#4039)
harishwar017 Jul 5, 2024
fc6c139
Weighted Directed Graph Data Inclusion (#4017)
TRY-ER Jul 8, 2024
a8c15a2
Protein structure prediction with ESMFold tutorial (#4030)
anamika-yadav99 Jul 8, 2024
b899f7f
Added learning rate schedulers with warmups (#4050)
arunppsg Jul 9, 2024
9a35043
ODE Solver Tutorial (#4025)
sudo-rsingh Jul 10, 2024
abe5e5d
small fix to get more info from pysam pileups (#4053)
KitVB Jul 10, 2024
fc588a1
fix doctest (#4064)
arunppsg Jul 23, 2024
9a1b0ce
Prot bert custom classfier (#4052)
Shiva-sankaran Jul 24, 2024
b30e9d9
ADD: figures citations & improved intro (#4043)
elisagdelope Jul 24, 2024
5874d38
Add MXMNET model and its test (#3970)
riya-singh28 Jul 24, 2024
4cdf544
Adding Graph Conv Model class (#4063)
NimishaDey Jul 26, 2024
0a10305
adding part (1/3) of deepvariant realigner featurizer (#4068)
KitVB Jul 26, 2024
58c8ba8
Added SMILES tokenization section to tutorial (#4056)
frenio Jul 26, 2024
479ce20
modify: compile tutorial (#4062)
gauthamk02 Jul 29, 2024
9b4fd35
Small modification (#4072)
NimishaDey Jul 29, 2024
6ec51df
ProtBERT fix (#4075)
Shiva-sankaran Aug 2, 2024
fb5419d
Crystallization Tendency Regression Tutorial (#4070)
TRY-ER Aug 2, 2024
3241eef
Added ProtBERT tutorial (#4041)
Shiva-sankaran Aug 2, 2024
8043792
Ci fix (#4083)
Shiva-sankaran Aug 5, 2024
033b795
polymer represent wdgraph elaborated (#4088)
TRY-ER Aug 7, 2024
051a199
Dqc fix final (#4095)
sudo-rsingh Aug 9, 2024
64382bd
Linting fix (#4086)
Shiva-sankaran Aug 9, 2024
af5506e
Tutorial on PSMILES (#4097)
TRY-ER Aug 12, 2024
3235497
Dft 8 final (#4074)
sudo-rsingh Aug 12, 2024
8658b26
druggability assessment tutorial (#4069)
anamika-yadav99 Aug 12, 2024
8308b67
Fix deepchem Torch CI (#4100)
shreyasvinaya Aug 16, 2024
172a28e
Fixing image rendering for graph tutorial (#4093)
TRY-ER Aug 19, 2024
63d9bfd
Rebased Protein LM Tutorial w. Elisa (#4082)
dhuvik Aug 19, 2024
2a63b76
ESM-2 Fine-Tuning for Protein Binding Sites Prediction Tutorial + Uni…
elisagdelope Aug 21, 2024
cd4b370
Fill Mask Pipeline to HuggingFace Model (#4092)
dhuvik Aug 21, 2024
472c255
adding functionality to find candidate windows (#4102)
KitVB Aug 21, 2024
1ae0b66
Torch Model and tests for smiles2vec (#4045)
harishwar017 Aug 23, 2024
3c010b6
Tutorial on PolyBERT (#4105)
TRY-ER Aug 23, 2024
5cd9e6f
Cleanup-1 [termination condition and init file] (#4109)
sudo-rsingh Sep 6, 2024
632c427
fix to get accurate pileups with pysam (#4121)
KitVB Sep 11, 2024
eef62ce
Cleanup - 2 (#4113)
sudo-rsingh Sep 11, 2024
b5c951f
Cleanup 3 [Requirements] (#4114)
sudo-rsingh Sep 16, 2024
1cc4ca0
comment protbert test (#4122)
sudo-rsingh Sep 20, 2024
d83804e
Antibody Modeling (WIP) (#4106)
dhuvik Sep 20, 2024
86a18fd
fixing the render issue for PSMILES tutorial (#4124)
TRY-ER Sep 20, 2024
73c877a
added polymer tutorials to the readme (#4125)
TRY-ER Sep 20, 2024
8c46096
Added PyTorch implementation for creating custom graph convolution in…
spellsharp Oct 7, 2024
c0e5f91
Added attribute descriptors to RDKitDescriptors (#4138)
frenio Oct 13, 2024
97240bc
fixing pyparsing issue in unit-tests by pinning to supported version …
TRY-ER Oct 14, 2024
055ce0b
Weighted Directed Graph Validator Setup (#4020)
TRY-ER Oct 14, 2024
d826c66
modified wrong comment in tutorial and output section (#4140)
TRY-ER Oct 15, 2024
ec22084
added functionality to align reads using ssw algorithm (#4142)
KitVB Oct 16, 2024
c614f9e
fix mtr finetuning (#4143)
riya-singh28 Oct 17, 2024
295431a
Tutorial bioinfo (#4080)
Harindhar10 Oct 18, 2024
dd0ea98
Update notebook reference (#4024)
emmanuel-ferdman Oct 21, 2024
42d443e
Add MoLFormer model to DeepChem (#4145)
riya-singh28 Oct 24, 2024
02ee67a
Added Pileup Featurizer to get pileup images from haplotype windows (…
KitVB Oct 28, 2024
183b6cd
doc fix (#4156)
KitVB Oct 30, 2024
e8497af
Wdg poly primary util (#4021)
TRY-ER Oct 30, 2024
11d3932
Robust baseclass (#4154)
spellsharp Oct 31, 2024
b30a523
Adding OneFormer model to DeepChem (#4146)
aaronrockmenezes Nov 6, 2024
bc136da
Adds lamb optimizer to Deepchem (#4168)
riya-singh28 Nov 12, 2024
be83685
Bug Fixs nov 2024 (#4175)
shreyasvinaya Nov 12, 2024
f44e45b
Adding HF CI (#4173)
Shiva-sankaran Nov 12, 2024
2d8d0bf
Add InceptionV3 model for deepvariant (#4167)
KitVB Nov 18, 2024
b9ae2c5
Progressive multitask patch (#4177)
spellsharp Nov 18, 2024
2b0b99d
Add layers and docs for SE(3) Transformer implemention (#4179)
JoseAntonioSiguenza Nov 20, 2024
b2a130e
Robust classifier (#4159)
spellsharp Nov 20, 2024
4a29b98
added overfit test, updated documentation, fixed yapf and flake8 erro…
KitVB Nov 25, 2024
cbd54e8
Robust regressor (#4160)
spellsharp Nov 25, 2024
425d432
add batch_normalize=False (#4191)
prasanth30 Dec 9, 2024
81a2726
Fix failing CI flake8 and mypy checks (#4190)
JoseAntonioSiguenza Dec 11, 2024
a855d16
irv_4 (#4187)
Harindhar10 Dec 13, 2024
348ca63
Fix chemberta for Multitask Classification (#4194)
riya-singh28 Dec 17, 2024
216faf6
Fix MoLFormer for finetuning (#4195)
riya-singh28 Dec 17, 2024
7207078
swapping torch code for tutorial (#4200)
manas1245agrawal Dec 18, 2024
2c19692
Parallelized ODE Solver (#4164)
aaronrockmenezes Dec 24, 2024
73b62f8
Minor fixes in DeepVariant Featurizers (#4198)
KitVB Dec 24, 2024
b5c1700
Removes "module." prefix from the state_dict keys of models trained u…
riya-singh28 Jan 7, 2025
b101324
Doctest fix for lamb optimizer (#4202)
riya-singh28 Jan 7, 2025
ff71545
`EquivariantGraphFeaturizer` for molecular data (#4223)
JoseAntonioSiguenza Jan 8, 2025
76f56da
Ported the existing TensorFlow cosine similarity function to PyTorch…
Dragonwagon18 Jan 15, 2025
0bdbb9a
Bug fix nov 24 v2 (#4176)
shreyasvinaya Jan 17, 2025
6e49b04
Compatibility issue solved (#4234)
yash-gt08 Jan 19, 2025
8f387d1
Fixing tutorial (#4237)
manas1245agrawal Jan 22, 2025
2305341
Fixing tutorial (#4248)
manas1245agrawal Jan 24, 2025
0a93e6c
BLAS error fix for cmake (#4251)
shreyasvinaya Jan 29, 2025
23e126a
Atomic contribution notebook fix (#4249)
bhuvanmdev Jan 29, 2025
79609e3
Dag layers ported (#4238)
bhuvanmdev Jan 31, 2025
642ad6e
fix dqc test in CI (#4264)
JoseAntonioSiguenza Feb 3, 2025
685df4b
Added SphericalHarmonics and irreps, and fixed wigner_D doctest in eq…
JoseAntonioSiguenza Feb 3, 2025
4d04346
rebased BAMLoader optimization (#4257)
KitVB Feb 3, 2025
4fd86b0
PINNModel ported to PyTorch (#4206)
spellsharp Feb 3, 2025
1ab7663
fixed the ODE tutorial (#4262)
a-b-h-a-y-s-h-i-n-d-e Feb 3, 2025
d406a91
CI fixes on dqc test, PINN imports and mypy errors (#4278)
JoseAntonioSiguenza Feb 10, 2025
bd35b93
Add equivariance utils functions to compute weight basis for SE(3)-Tr…
JoseAntonioSiguenza Feb 10, 2025
85b8d1d
init commit (#4270)
bhuvanmdev Feb 10, 2025
Robust classifier (deepchem#4159)
* Robust classifier

* Adds more tasks to the forward test for architecture similarity

* Resolves conflict and passes lint tests

* Fixes input label processing using default generator

* Updates weights files with more appropriate ones with more tasks

* Formats for yapf and flake8

* Added tests for classification

* Fixes reload test

* Added PyTorch robustmultitask classifier model to rst

* Added docstring to default generator and logs training per epoch

* Removes logger and docstring

* Adds docstring to default generator and logs epoch training
spellsharp authored Nov 20, 2024
commit b2a130edca0beaa1ba5407f3119dbb6922093a73
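The bypass architecture this commit's classifier wraps can be illustrated outside DeepChem. Below is a minimal NumPy sketch of the idea, not DeepChem's implementation: a shared dense trunk feeds every task, while each task also gets a small "bypass" layer reading the raw features directly, and the two are concatenated before the per-task output layer. All sizes, the ReLU-only activations, and the absence of biases are simplifications chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    return np.maximum(x, 0.0)


n_samples, n_features, n_tasks, n_classes = 8, 16, 3, 2
hidden, bypass_hidden = 10, 4

X = rng.normal(size=(n_samples, n_features))

# Shared trunk: one dense layer seen by every task.
W_shared = rng.normal(scale=0.02, size=(n_features, hidden))

# Per-task bypass: a small dense layer fed directly from the raw
# features, so each task can route around the shared trunk.
W_bypass = rng.normal(scale=0.02, size=(n_tasks, n_features, bypass_hidden))
W_out = rng.normal(scale=0.02, size=(n_tasks, hidden + bypass_hidden, n_classes))

h_shared = relu(X @ W_shared)            # (n_samples, hidden)

task_logits = []
for t in range(n_tasks):
    h_bypass = relu(X @ W_bypass[t])     # (n_samples, bypass_hidden)
    h = np.concatenate([h_shared, h_bypass], axis=1)
    task_logits.append(h @ W_out[t])     # (n_samples, n_classes)

# Same stacking as the forward pass in the diff: one logit matrix per task.
logits = np.stack(task_logits, axis=1)   # (n_samples, n_tasks, n_classes)
print(logits.shape)
```

If one task's labels interfere destructively with the shared representation, its bypass path still carries a task-specific view of the features.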
2 changes: 1 addition & 1 deletion deepchem/models/torch_models/__init__.py
@@ -34,7 +34,7 @@
from deepchem.models.torch_models.unet import UNet, UNetModel
from deepchem.models.torch_models.graphconvmodel import _GraphConvTorchModel, GraphConvModel
from deepchem.models.torch_models.smiles2vec import Smiles2Vec, Smiles2VecModel
from deepchem.models.torch_models.robust_multitask import RobustMultitask
from deepchem.models.torch_models.robust_multitask import RobustMultitask, RobustMultitaskClassifier
from deepchem.models.torch_models.inceptionv3 import InceptionV3Model, InceptionA, InceptionB, InceptionC, InceptionD, InceptionE, InceptionAux, BasicConv2d
try:
from deepchem.models.torch_models.dmpnn import DMPNN, DMPNNModel
161 changes: 160 additions & 1 deletion deepchem/models/torch_models/robust_multitask.py
@@ -4,7 +4,11 @@
import logging
from typing import List, Tuple, Callable, Literal, Union
from typing import Sequence as SequenceCollection
from deepchem.models.torch_models.torch_model import TorchModel
from deepchem.models import losses
from deepchem.utils.typing import OneOrMany, ActivationFn
from deepchem.metrics import to_one_hot
import datetime

logger = logging.getLogger(__name__)

@@ -205,7 +209,6 @@ def forward(
task_outputs.append(task_output)

output = torch.stack(task_outputs, dim=1)

if self.mode == 'classification':
if self.n_tasks == 1:
logits = output.view(-1, self.n_classes)
@@ -251,3 +254,159 @@ def _get_activation_class(self, activation_name: ActivationFn) -> Callable:
raise ValueError(
f"Invalid activation function: {activation_name}. Only activations of type nn.Module"
)


class RobustMultitaskClassifier(TorchModel):
"""
Implements a neural network for robust multitasking.
The key idea of this model is to have bypass layers that feed
directly from features to task output. This might provide some
flexibility to route around challenges in multitasking with
destructive interference.
References
----------
This technique was introduced in [1]_
.. [1] Ramsundar, Bharath, et al. "Is multitask deep learning practical for pharma?" Journal of Chemical Information and Modeling 57.8 (2017): 2068-2076.
"""

def __init__(self,
n_tasks: int,
n_features: int,
layer_sizes: SequenceCollection[int] = [1000],
weight_init_stddevs: OneOrMany[float] = 0.02,
bias_init_consts: OneOrMany[float] = 1.0,
weight_decay_penalty: float = 0.0,
weight_decay_penalty_type: Literal['l1', 'l2'] = "l2",
dropouts: OneOrMany[float] = 0.5,
activation_fns: OneOrMany[ActivationFn] = nn.ReLU(),
n_classes: int = 2,
bypass_layer_sizes: SequenceCollection[int] = [100],
bypass_weight_init_stddevs: OneOrMany[float] = [0.02],
bypass_bias_init_consts: OneOrMany[float] = [1.0],
bypass_dropouts: OneOrMany[float] = [0.5],
**kwargs):
"""
Parameters
----------
n_tasks: int
number of tasks
n_features: int
number of features
layer_sizes: list
the size of each dense layer in the network. The length of this list determines the number of layers.
weight_init_stddevs: list or float
the standard deviation of the distribution to use for weight initialization of each layer. The length
of this list should equal len(layer_sizes). Alternatively this may be a single value instead of a list,
in which case the same value is used for every layer.
bias_init_consts: list or float
the value to initialize the biases in each layer to. The length of this list should equal len(layer_sizes).
Alternatively this may be a single value instead of a list, in which case the same value is used for every layer.
weight_decay_penalty: float
the magnitude of the weight decay penalty to use
weight_decay_penalty_type: str
the type of penalty to use for weight decay, either 'l1' or 'l2'
dropouts: list or float
the dropout probability to use for each layer. The length of this list should equal len(layer_sizes).
Alternatively this may be a single value instead of a list, in which case the same value is used for every layer.
activation_fns: list or object
the PyTorch activation function to apply to each layer. The length of this list should equal
len(layer_sizes). Alternatively this may be a single value instead of a list, in which case the
same value is used for every layer.
n_classes: int
the number of classes
bypass_layer_sizes: list
the size of each dense layer in the bypass network. The length of this list determines the number of bypass layers.
bypass_weight_init_stddevs: list or float
the standard deviation of the distribution to use for weight initialization of bypass layers.
same requirements as weight_init_stddevs
bypass_bias_init_consts: list or float
the value to initialize the biases in bypass layers
same requirements as bias_init_consts
bypass_dropouts: list or float
the dropout probability to use for bypass layers.
same requirements as dropouts
"""
if not isinstance(activation_fns, nn.Module):
logger.warning(
"Warning: Activation functions should be of type nn.Module. Using default activation function: ReLU."
)
activation_fns = nn.ReLU()

# The labels are one-hot encoded.
loss = losses.SoftmaxCrossEntropy()
output_types = ['prediction', 'loss']
self.n_classes = n_classes
self.n_tasks = n_tasks

model = RobustMultitask(
n_tasks=self.n_tasks,
n_features=n_features,
layer_sizes=layer_sizes,
mode='classification',
weight_init_stddevs=weight_init_stddevs,
bias_init_consts=bias_init_consts,
weight_decay_penalty=weight_decay_penalty,
weight_decay_penalty_type=weight_decay_penalty_type,
activation_fns=activation_fns,
dropouts=dropouts,
n_classes=self.n_classes,
bypass_layer_sizes=bypass_layer_sizes,
bypass_weight_init_stddevs=bypass_weight_init_stddevs,
bypass_bias_init_consts=bypass_bias_init_consts,
bypass_dropouts=bypass_dropouts)
self.activation_fns = model.activation_fns
self.dropouts = model.dropouts
self.shared_layers = model.shared_layers
self.bypass_layers = model.bypass_layers
self.output_layers = model.output_layers

super(RobustMultitaskClassifier,
self).__init__(model,
loss,
output_types=output_types,
regularization_loss=model.regularization_loss,
**kwargs)

def default_generator(self,
dataset,
epochs=1,
mode='fit',
deterministic=True,
pad_batches=True):
"""Create a generator that iterates batches for a dataset.

Subclasses may override this method to customize how model inputs are
generated from the data.

Parameters
----------
dataset: Dataset
the data to iterate
epochs: int
the number of times to iterate over the full dataset
mode: str
allowed values are 'fit' (called during training), 'predict' (called
during prediction), and 'uncertainty' (called during uncertainty
prediction)
deterministic: bool
whether to iterate over the dataset in order, or randomly shuffle the
data for each epoch
pad_batches: bool
whether to pad each batch up to this model's preferred batch size

Returns
-------
a generator that iterates batches, each represented as a tuple of lists:
([inputs], [outputs], [weights])
"""
for epoch in range(epochs):
logger.info("Starting training for epoch %d at %s" %
(epoch, datetime.datetime.now().ctime()))
for (X_b, y_b, w_b,
ids_b) in dataset.iterbatches(batch_size=self.batch_size,
deterministic=deterministic,
pad_batches=pad_batches):
if y_b is not None:
y_b = to_one_hot(y_b.flatten(), self.n_classes).reshape(
-1, self.n_tasks, self.n_classes)
yield ([X_b], [y_b], [w_b])
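The label transform in `default_generator` above is easy to check in isolation. This sketch uses a minimal stand-in for `deepchem.metrics.to_one_hot` (its assumed behavior: integer labels to a 0/1 matrix) to show how per-task integer labels of shape `(n_samples, n_tasks)` become one-hot targets of shape `(n_samples, n_tasks, n_classes)`:

```python
import numpy as np


def to_one_hot(y, n_classes):
    # Minimal stand-in for deepchem.metrics.to_one_hot: maps a 1-D
    # array of integer class labels to an (n, n_classes) 0/1 matrix.
    out = np.zeros((len(y), n_classes))
    out[np.arange(len(y)), y.astype(int)] = 1.0
    return out


n_samples, n_tasks, n_classes = 4, 3, 2
# Integer labels per task, shape (n_samples, n_tasks).
y_b = np.array([[0, 1, 1],
                [1, 0, 1],
                [0, 0, 0],
                [1, 1, 0]])

# Same transform as default_generator: flatten across tasks, one-hot
# encode, then restore the per-task axis.
y_hot = to_one_hot(y_b.flatten(), n_classes).reshape(-1, n_tasks, n_classes)

print(y_hot.shape)   # (4, 3, 2)
print(y_hot[0, 1])   # [0. 1.] -- sample 0, task 1 has label 1
```

The reshape works because `flatten` and `reshape` both traverse the task axis fastest, so row `i` of each per-task one-hot block lands back on sample `i`.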
Binary file not shown.
Binary file not shown.