SpectraFrame

pyspc.spectra.SpectraFrame

A class to represent unfolded spectral data

Source code in pyspc/spectra.py
class SpectraFrame:
    """A class to represent unfolded spectral data"""

    # ----------------------------------------------------------------------
    # Constructor

    def __init__(  # noqa: C901
        self,
        spc: ArrayLike,
        wl: Optional[ArrayLike] = None,
        data: Optional[Union[pd.DataFrame, pd.Series, dict]] = None,
    ) -> None:
        """Create a new SpectraFrame object

        Parameters
        ----------
        spc : ArrayLike
            Spectral data. A 2D array where each row represents a spectrum.
        wl : Optional[ArrayLike], optional
            Spectral coordinates, i.e. wavelengths, wavenumbers, etc.
            If None, the column indices 0..N-1 are used, by default None.
        data : Optional[Union[pd.DataFrame, pd.Series, dict]], optional
            Additional meta-data, by default None

        Raises
        ------
        ValueError
            If the provided data or wl is not valid (i.e. wrong shape, etc.)
        ValueError
            If shapes do not match (e.g. number of rows in spc and data)

        Examples
        --------
        >>> np.random.seed(42)
        >>> sf = SpectraFrame(
        ...     np.random.rand(4,5),
        ...     wl=np.linspace(600,660,5),
        ...     data={"group": list("AABB")}
        ... )
        >>> print(sf)
              600.0     615.0     630.0     645.0     660.0 group
        0  0.374540  0.950714  0.731994  0.598658  0.156019     A
        1  0.155995  0.058084  0.866176  0.601115  0.708073     A
        2  0.020584  0.969910  0.832443  0.212339  0.181825     B
        3  0.183405  0.304242  0.524756  0.431945  0.291229     B
        """
        # Prepare SPC
        spc = np.array(spc)
        if spc.ndim == 1:
            spc = spc.reshape(1, -1)
        elif spc.ndim > 2:
            raise ValueError("Invalid spc: expected a 1D or 2D array")

        # Prepare wl
        if wl is None:
            wl = np.arange(spc.shape[1])
        else:
            wl = np.array(wl)
            if wl.ndim > 1:
                raise ValueError("Invalid wl: expected a 1D array")

        # Parse data
        if data is None:
            data = pd.DataFrame(index=range(len(spc)), columns=None)
        if not isinstance(data, pd.DataFrame):
            data = pd.DataFrame(data)

        # Checks
        if spc.shape[1] != len(wl):
            raise ValueError(
                "length of wavelength must be equal to number of columns in spc"
            )

        if spc.shape[0] != data.shape[0]:
            raise ValueError(
                "data must have the same number of rows as spc"
            )

        self.spc = spc
        self.wl = wl
        self.data = data

    @classmethod
    def fromfile(
        cls, path: PathLike, format: Optional[str] = None, **kwargs
    ) -> "SpectraFrame":
        """Create a SpectraFrame by reading a file via the matching pandas reader.

        If `format` is None, it is inferred from the file extension (".pkl"
        maps to "pickle"). The corresponding `pandas.read_<format>` function is
        used to load the file; extra keyword arguments are passed through to it.
        """
        path: Path = Path(path)

        # Guess the format if not provided
        if format is None:
            format = path.suffix.strip(".").lower()
            if format == "pkl":
                format = "pickle"

        # Get corresponding pandas read function
        read_func = getattr(pd, f"read_{format}", None)
        if read_func is None:
            raise ValueError(f"Unsupported file format: {format}")

        # Read the file
        df = read_func(path, **kwargs)

        if isinstance(df.columns, pd.MultiIndex):
            # One type of export where data is stored as multiindex
            # (spc, ...) -> for spectra, (data, ...) -> for data
            sf = cls(
                spc=df["spc"],
                wl=df["spc"].columns.values,
                data=df["data"],
            )
        else:
            # Another type of export where data is stored as single index
            # First `nwl` columns are wavelengths, the rest are data
            # To get `nwl` we check all numeric column names
            nwl = np.where(
                [not str(col).strip("-")[0].isdigit() for col in df.columns]
            )[0][0]
            sf = cls(
                df.iloc[:, :nwl].values,
                df.columns[:nwl],
                data=df.iloc[:, nwl:],
            )

        # Try to convert wl to float
        try:
            sf.wl = sf.wl.astype(float)
        except ValueError:
            warnings.warn(
                f"Reading {path.stem}: Could not convert wavelengths to float. "
                f"Values: {sf.wl}. "
                "Keeping them as strings."
            )

        return sf

    # ----------------------------------------------------------------------
    # Internal helpers

    def _parse_string_or_column_param(
        self, param: Union[str, pd.Series, np.ndarray, list, tuple]
    ) -> pd.Series:
        """Manage different types of method arguments

        Many methods provide flexibility in the input parameters. For example,
        a user can provide either a string with the name of a data column or
        an array-like structure with the same number of elements as the number
        of spectra. This method helps to parse and convert the input to a
        standard format.

        Parameters
        ----------
        param : Union[str, pd.Series, np.ndarray, list, tuple]
            The input parameter to be parsed

        Returns
        -------
        pd.Series
            A pandas Series with the same index as the data

        Raises
        ------
        TypeError
            If it was not possible to parse the input parameter

        Examples
        --------
        >>> sf = SpectraFrame(np.random.rand(2,5), data={"group": list("AB")})
        >>> sf._parse_string_or_column_param("group")
        0    A
        1    B
        Name: group, dtype: object
        >>> sf._parse_string_or_column_param(["C", "D"])
        0    C
        1    D
        dtype: object
        >>> sf._parse_string_or_column_param(pd.Series(["C", "D"],index=[3,4]))
        0    C
        1    D
        dtype: object
        """
        if isinstance(param, str) and (param in self.data.columns):
            return self.data[param]
        elif isinstance(param, pd.Series) and (param.shape[0] == self.nspc):
            return pd.Series(param.values, index=self.index)
        elif (
            isinstance(param, np.ndarray)
            and (param.ndim == 1)
            and (param.shape[0] == self.nspc)
        ):
            return pd.Series(param, index=self.index)
        elif isinstance(param, (list, tuple)) and (len(param) == self.nspc):
            return pd.Series(param, index=self.index)
        else:
            raise TypeError(
                "Invalid parameter. It must be either a string with a data "
                "column name or a pd.Series / np.array / list / tuple of "
                "length equal to the number of spectra."
            )

    # ----------------------------------------------------------------------
    # Properties for a quick access

    @property
    def shape(self) -> Tuple[int, int, int]:
        """A tuple representing the dimensionality of the Spectra

        Returns
        -------
        Tuple[int, int, int]:
            A tuple of the following structure:
            1. number of spectra (i.e. number of rows)
            2. number of data columns
            3. number of wavelength points
        """
        return self.nspc, self.data.shape[1], self.nwl

    @property
    def nwl(self) -> int:
        """Number of wavelength points"""
        return len(self.wl)

    @property
    def nspc(self) -> int:
        """Number of spectra in the object"""
        return self.spc.shape[0]

    @property
    def is_equally_spaced(self) -> bool:
        """Are wavelength values equally spaced?

        Note: uses exact comparison of consecutive differences, so
        floating-point rounding may make nearly-equal spacings compare unequal.
        """
        return len(np.unique(np.diff(self.wl))) == 1

    # ----------------------------------------------------------------------
    # Index

    @property
    def index(self) -> pd.Index:
        """Row indices (same as ``self.data.index``)."""
        return self.data.index

    @index.setter
    def index(self, value: Any) -> None:
        self.data.index = value

    def set_index(self, keys, *args, **kwargs) -> "SpectraFrame":
        """Return a new SpectraFrame with a new index.

        Note: ``inplace`` is ignored to match SpectraFrame copy semantics.
        """
        kwargs.pop("inplace", None)
        new_data = self.data.set_index(keys, *args, **kwargs)
        return SpectraFrame(spc=self.spc.copy(), wl=self.wl.copy(), data=new_data)

    def reset_index(self, *args, **kwargs) -> "SpectraFrame":
        """Return a new SpectraFrame with a reset index.

        Note: ``inplace`` is ignored to match SpectraFrame copy semantics.
        """
        kwargs.pop("inplace", None)
        new_data = self.data.reset_index(*args, **kwargs)
        return SpectraFrame(spc=self.spc.copy(), wl=self.wl.copy(), data=new_data)

    # ----------------------------------------------------------------------
    # Sorting

    def sort_index(self, *args, **kwargs) -> "SpectraFrame":
        """Return a new SpectraFrame sorted by row index.

        Mirrors pandas.DataFrame.sort_index for sorting rows, but always returns
        a new SpectraFrame and keeps spectra aligned with metadata.

        Examples
        --------
        >>> sf = SpectraFrame(
        ...     [[1, 2], [3, 4]],
        ...     wl=[500, 600],
        ...     data={"group": ["B", "A"]},
        ... )
        >>> sf.index = [2, 1]
        >>> print(sf.sort_index())
           500  600 group
        1    3    4     A
        2    1    2     B
        """
        kwargs.pop("inplace", None)
        axis = kwargs.pop("axis", 0)
        if axis not in [0, "index"]:
            raise ValueError("SpectraFrame.sort_index only supports axis=0 (rows).")
        ignore_index = kwargs.pop("ignore_index", False)

        # Use stable sorting to preserve order within duplicate indices.
        kwargs.setdefault("kind", "mergesort")

        # Track original row positions to keep alignment with duplicate indices.
        sorted_data = self.data.assign(_pos=np.arange(len(self.data))).sort_index(
            *args, **kwargs
        )
        row_indexer = sorted_data["_pos"].to_numpy()
        new_spc = self.spc[row_indexer, :]
        sorted_data = sorted_data.drop(columns="_pos")

        if ignore_index:
            sorted_data = sorted_data.reset_index(drop=True)

        return SpectraFrame(spc=new_spc, wl=self.wl.copy(), data=sorted_data)

    def sort_values(self, by, *args, **kwargs) -> "SpectraFrame":
        """Return a new SpectraFrame sorted by row values.

        Mirrors pandas.DataFrame.sort_values for sorting rows, but always returns
        a new SpectraFrame and keeps spectra aligned with metadata.

        Examples
        --------
        >>> sf = SpectraFrame(
        ...     [[1, 2], [3, 4]],
        ...     wl=[500, 600],
        ...     data={"group": ["B", "A"]},
        ... )
        >>> print(sf.sort_values("group"))
           500  600 group
        1    3    4     A
        0    1    2     B
        >>> sf = SpectraFrame(
        ...     [[1, 2], [3, 4], [5, 6], [7, 8]],
        ...     wl=[500, 600],
        ...     data={
        ...         "group": ["B", "A", "B", "A"],
        ...         "score": [1, 2, 3, 4],
        ...     },
        ... )
        >>> print(sf.sort_values(["group", "score"], ascending=[True, False]))
           500  600 group  score
        3    7    8     A      4
        1    3    4     A      2
        2    5    6     B      3
        0    1    2     B      1
        """
        kwargs.pop("inplace", None)
        axis = kwargs.pop("axis", 0)
        if axis not in [0, "index"]:
            raise ValueError("SpectraFrame.sort_values only supports axis=0 (rows).")
        ignore_index = kwargs.pop("ignore_index", False)

        # Track original row positions to keep alignment with duplicate indices.
        sorted_data = self.data.assign(_pos=np.arange(len(self.data))).sort_values(
            by=by, *args, **kwargs
        )
        row_indexer = sorted_data["_pos"].to_numpy()
        new_spc = self.spc[row_indexer, :]
        sorted_data = sorted_data.drop(columns="_pos")

        if ignore_index:
            sorted_data = sorted_data.reset_index(drop=True)

        return SpectraFrame(spc=new_spc, wl=self.wl.copy(), data=sorted_data)

    def wl_sort(
        self, ascending: bool = True, kind: str = "quicksort"
    ) -> "SpectraFrame":
        """Return a new SpectraFrame with sorted wavelengths.

        The wavelength array is sorted, and columns in `spc` are reordered to
        stay aligned with the updated wavelength order.

        Examples
        --------
        >>> sf = SpectraFrame(
        ...     [[1, 2], [3, 4]],
        ...     wl=[600, 500],
        ...     data={"group": ["A", "B"]},
        ... )
        >>> print(sf.wl_sort())
           500  600 group
        0    2    1     A
        1    4    3     B
        """
        wl_order = np.argsort(self.wl, kind=kind)
        if not ascending:
            wl_order = wl_order[::-1]
        new_wl = self.wl[wl_order]
        new_spc = self.spc[:, wl_order]
        return SpectraFrame(spc=new_spc, wl=new_wl, data=self.data.copy())

    # ----------------------------------------------------------------------
    # Copying

    def copy(self) -> "SpectraFrame":
        return SpectraFrame(
            spc=self.spc.copy(), wl=self.wl.copy(), data=self.data.copy()
        )

    # ----------------------------------------------------------------------
    # Accessing data
    def _parse_getitem_tuple(self, slicer: tuple) -> tuple:
        """Parse the tuple provided in __getitem__/__setitem__ methods

        Basically, validates the tuple and formats each part of the tuple
        to be in a standard format: slice or np.array with iloc values.

        Parameters
        ----------
        slicer : tuple
            The tuple provided in __getitem__ method

        Returns
        -------
        tuple
            A tuple of three slices: row, column, and wavelength

        Raises
        ------
        ValueError
            If the provided slicer is not valid
        """
        if not (isinstance(slicer, tuple) and (len(slicer) in [3, 4])):
            raise ValueError(
                "Invalid subset value. Provide 3 values in format <row, column, wl>"
                "or 4 values in format <row, column, wl, True/False>"
            )

        use_iloc = False
        if len(slicer) == 4:
            use_iloc = bool(slicer[3])
            slicer = slicer[:3]

        rows, cols, wls = slicer

        # From labels to indices
        row_selector = _parse_getitem_single_selector(
            self.data.index, rows, iloc=use_iloc
        )
        col_selector = _parse_getitem_single_selector(
            self.data.columns, cols, iloc=use_iloc
        )
        wl_selector = _parse_getitem_single_selector(
            pd.Index(self.wl), wls, iloc=use_iloc
        )

        return row_selector, col_selector, wl_selector

    def __getitem__(self, given: Union[str, tuple]) -> Union[pd.Series, "SpectraFrame"]:
        """Get a subset of the SpectraFrame

        Provides a logic for the `[...]` operator.
        Two types of slicing are supported:
        1. Single string - returns a corresponding column from the data
        2. Tuple of three or four slicers - returns a subset of the SpectraFrame
        The latter works similarly to the `hyperSpec` package in R: it allows
        slicing the data as `sf[rows, cols, wls]` or
        `sf[rows, cols, wls, is_iloc]`, where `rows`, `cols`, and `wls` can
        each be a single value, a list of values, a slice, or a boolean vector;
        and `is_iloc` is a boolean flag indicating whether the slicing is done
        by iloc or by label (similar to `wl_index` in `hyperSpec`).

        Warning
        -------
        The slicing behaves like label-based slicing in `pandas` (`.loc`), so
        the last value of a slice is included in the output.

        Parameters
        ----------
        given : Union[str, tuple]
            Single string or a tuple of three slicers and an optional flag

        Returns
        -------
        Union[pd.Series, SpectraFrame]
            Either a single column from the data or a subset of the SpectraFrame

        Examples
        --------
        >>> # Generate a SpectraFrame
        >>> spc = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
        >>> wl = np.array([400, 500, 600])
        >>> data = pd.DataFrame(
        ...     {"A": [10, 11, 12], "B": [13, 14, 15], "C": [16, 17, 18]},
        ...     index=[5, 6, 7],
        ... )
        >>> sf = SpectraFrame(spc, wl, data)
        >>> print(sf)
           400  500  600   A   B   C
        5  1.0  2.0  3.0  10  13  16
        6  4.0  5.0  6.0  11  14  17
        7  7.0  8.0  9.0  12  15  18

        >>> # Get a single column
        >>> print(sf["A"])
        5    10
        6    11
        7    12
        Name: A, dtype: int64

        >>> # Get a subset of the SpectraFrame
        >>> print(sf[:5, :, :500])
           400  500   A   B   C
        5  1.0  2.0  10  13  16

        >>> # Access by iloc indexes
        >>> print(sf[:1, :, :1, True])
           400   A   B   C
        5  1.0  10  13  16

        >>> print(sf[6:, 'B':'C', 500:])
           500  600   B   C
        6  5.0  6.0  14  17
        7  8.0  9.0  15  18

        >>> print(sf[6:, 'B':'C', [400, 600]])
           400  600   B   C
        6  4.0  6.0  14  17
        7  7.0  9.0  15  18

        >>> print(sf[:, :, 400])
           400   A   B   C
        5  1.0  10  13  16
        6  4.0  11  14  17
        7  7.0  12  15  18

        >>> print(sf[:, :, 550])
        Traceback (most recent call last):
        ValueError: Unexpected selector [550]

        >>> print(sf[:, :, 510:550])
            A   B   C
        5  10  13  16
        6  11  14  17
        7  12  15  18

        >>> print(sf[:, :, 350:450])
           400   A   B   C
        5  1.0  10  13  16
        6  4.0  11  14  17
        7  7.0  12  15  18
        """
        if isinstance(given, str):
            return self.data[given]

        row_slice, col_slice, wl_slice = self._parse_getitem_tuple(given)
        return SpectraFrame(
            spc=self.spc[row_slice, wl_slice],
            wl=self.wl[wl_slice],
            data=self.data.iloc[row_slice, col_slice],
        )

    def __setitem__(self, given: Union[str, tuple], value: Any) -> None:
        """Set values in a subset of the SpectraFrame

        Provides a logic for the `frame[<given>] = <value>` operator.
        <given> has the same format as in `__getitem__` method. The <value>
        can be either a single value or array-like structure with the same
        number of elements as the subset of the SpectraFrame.

        Warning
        -------
        Either the data-column slicer or the wavelength slicer (i.e. the second
        or the third slicer) must be `:`. Otherwise it is ambiguous where to
        put the value, so the method raises an error in such cases,
        e.g. `sf[:, "a", 400:1000] = 10`.


        Parameters
        ----------
        given : Union[str, tuple]
            Single string or a tuple of three slicers
        value : Any
            The value to be set in the subset

        Examples
        --------
        >>> # Generate a SpectraFrame
        >>> spc = np.arange(9).reshape(3, 3)
        >>> sf = SpectraFrame(spc, [400, 500, 600], {"A": [10, 11, 12]})
        >>> print(sf)
           400  500  600   A
        0    0    1    2  10
        1    3    4    5  11
        2    6    7    8  12

        >>> # Add a column
        >>> sf["B"] = [1, 2, 3]
        >>> print(sf)
           400  500  600   A  B
        0    0    1    2  10  1
        1    3    4    5  11  2
        2    6    7    8  12  3

        >>> # Edit a column
        >>> sf["B"] = [20, 21, 22]
        >>> print(sf)
           400  500  600   A   B
        0    0    1    2  10  20
        1    3    4    5  11  21
        2    6    7    8  12  22

        >>> # Set a single value
        >>> sf[0, :, 500] = 100
        >>> print(sf)
           400  500  600   A   B
        0    0  100    2  10  20
        1    3    4    5  11  21
        2    6    7    8  12  22

        >>> # Set a subset
        >>> sf[1:, :, 500:] = [[200, 201], [300, 301]]
        >>> print(sf)
           400  500  600   A   B
        0    0  100    2  10  20
        1    3  200  201  11  21
        2    6  300  301  12  22

        >>> # Set a subset with iloc
        >>> sf[:2, :, :2, True] = 0
        >>> print(sf)
           400  500  600   A   B
        0    0    0    2  10  20
        1    0    0  201  11  21
        2    6  300  301  12  22

        >>> # Invalid selector
        >>> sf[:, ["A", "B"], :500] = 0
        Traceback (most recent call last):
        ValueError: Invalid slicing...
        """
        if isinstance(given, str):
            self.data.loc[:, given] = value
            return

        row_slice, col_slice, wl_slice = self._parse_getitem_tuple(given)
        if _is_empty_slice(col_slice) and not _is_empty_slice(wl_slice):
            self.spc[row_slice, wl_slice] = value
        elif not _is_empty_slice(col_slice) and _is_empty_slice(wl_slice):
            self.data.iloc[row_slice, col_slice] = value
        else:
            raise ValueError(
                "Invalid slicing. Either data columns or "
                "wavelengths indexes must be `:`"
            )

    def __getattr__(self, name) -> pd.Series:
        if name in self.data.columns:
            return self.data[name]
        if name in ["columns"]:
            return self.data.columns
        raise AttributeError(
            f"'{type(self).__name__}' object has no attribute '{name}'"
        )

    def query(self, expr: str) -> "SpectraFrame":
        """Filter spectra using pandas DataFrame.query

        Parameters
        -----------
        expr : str
            Query expression

        Returns
        -------
        SpectraFrame
            A new SpectraFrame with the filtered data

        Examples
        --------
        >>> np.random.seed(42)
        >>> sf = SpectraFrame(np.random.rand(4, 5), data={"group": list("AABB")})
        >>> print(sf)
                  0  ...         4 group
        0  0.374540  ...  0.156019     A
        1  0.155995  ...  0.708073     A
        2  0.020584  ...  0.181825     B
        3  0.183405  ...  0.291229     B
        >>> sf.query("group == 'A'")
                  0  ...         4 group
        0  0.374540  ...  0.156019     A
        1  0.155995  ...  0.708073     A
        """
        indices = self.data.query(expr).index
        return self[indices, :, :]

    def assign(self, **kwargs) -> "SpectraFrame":
        """Assign new columns to a SpectraFrame.

        Returns a new SpectraFrame with the assigned columns.

        Parameters
        ----------
        **kwargs
            Column assignments, same as pandas DataFrame.assign()

        Returns
        -------
        SpectraFrame
            A new SpectraFrame with the assigned columns

        Examples
        --------
        >>> np.random.seed(42)
        >>> sf = SpectraFrame(np.random.rand(4, 5), data={"group": list("AABB")})
        >>> print(sf)
                  0  ...         4 group
        0  0.374540  ...  0.156019     A
        1  0.155995  ...  0.708073     A
        2  0.020584  ...  0.181825     B
        3  0.183405  ...  0.291229     B
        >>> sf_new = sf.assign(new_col=lambda x: x.group == "A")
        >>> print(sf_new)
                  0  ...         4 group  new_col
        0  0.374540  ...  0.156019     A     True
        1  0.155995  ...  0.708073     A     True
        2  0.020584  ...  0.181825     B    False
        3  0.183405  ...  0.291229     B    False
        """
        new_sf = self.copy()
        new_sf.data = new_sf.data.assign(**kwargs)
        return new_sf

    def drop(self, columns) -> "SpectraFrame":
        """Drop specified columns from the SpectraFrame.

        Returns a new SpectraFrame with the specified columns dropped.

        Parameters
        ----------
        columns : str or list of str
            Column name(s) to drop from the data

        Returns
        -------
        SpectraFrame
            A new SpectraFrame with specified columns dropped

        Examples
        --------
        >>> np.random.seed(42)
        >>> sf = SpectraFrame(
        ...     np.random.rand(4, 5),
        ...     data={"group": list("AABB"), "type": list("XYXY")}
        ... )
        >>> print(sf)
                  0  ...         4 group type
        0  0.374540  ...  0.156019     A    X
        1  0.155995  ...  0.708073     A    Y
        2  0.020584  ...  0.181825     B    X
        3  0.183405  ...  0.291229     B    Y
        >>> sf_new = sf.drop("type")
        >>> print(sf_new)
                  0  ...         4 group
        0  0.374540  ...  0.156019     A
        1  0.155995  ...  0.708073     A
        2  0.020584  ...  0.181825     B
        3  0.183405  ...  0.291229     B
        >>> sf_new2 = sf.drop(["group", "type"])
        >>> print(sf_new2)
                  0  ...         4
        0  0.374540  ...  0.156019
        1  0.155995  ...  0.708073
        2  0.020584  ...  0.181825
        3  0.183405  ...  0.291229
        """
        new_sf = self.copy()
        new_sf.data = new_sf.data.drop(columns=columns)
        return new_sf

    # ----------------------------------------------------------------------
    # Arithmetic operations +, -, *, /, **, abs, round, ceil, etc.

    def __add__(self, other: Any) -> "SpectraFrame":
        if isinstance(other, type(self)):
            other = other.spc
        return SpectraFrame(spc=self.spc.__add__(other), wl=self.wl, data=self.data)

    def __sub__(self, other: Any) -> "SpectraFrame":
        if isinstance(other, type(self)):
            other = other.spc
        return SpectraFrame(spc=self.spc.__sub__(other), wl=self.wl, data=self.data)

    def __mul__(self, other: Any) -> "SpectraFrame":
        if isinstance(other, type(self)):
            other = other.spc
        return SpectraFrame(spc=self.spc.__mul__(other), wl=self.wl, data=self.data)

    def __truediv__(self, other: Any) -> "SpectraFrame":
        if isinstance(other, type(self)):
            other = other.spc
        return SpectraFrame(spc=self.spc.__truediv__(other), wl=self.wl, data=self.data)

    def __pow__(self, other: Any) -> "SpectraFrame":
        return SpectraFrame(spc=self.spc.__pow__(other), wl=self.wl, data=self.data)

    def __radd__(self, other: Any) -> "SpectraFrame":
        return SpectraFrame(spc=self.spc.__radd__(other), wl=self.wl, data=self.data)

    def __rsub__(self, other: Any) -> "SpectraFrame":
        return SpectraFrame(spc=self.spc.__rsub__(other), wl=self.wl, data=self.data)

    def __rmul__(self, other: Any) -> "SpectraFrame":
        return SpectraFrame(spc=self.spc.__rmul__(other), wl=self.wl, data=self.data)

    def __rtruediv__(self, other: Any) -> "SpectraFrame":
        return SpectraFrame(
            spc=self.spc.__rtruediv__(other), wl=self.wl, data=self.data
        )

    # def __iadd__(self, other: Any) -> None:
    #     if isinstance(other, type(self)):
    #         other = other.spc
    #     self.spc = self.spc.__add__(other)

    # def __isub__(self, other: Any) -> None:
    #     if isinstance(other, type(self)):
    #         other = other.spc
    #     self.spc = self.spc.__sub__(other)

    # def __imul__(self, other: Any) -> None:
    #     if isinstance(other, type(self)):
    #         other = other.spc
    #     self.spc = self.spc.__mul__(other)

    # def __itruediv__(self, other: Any) -> None:
    #     if isinstance(other, type(self)):
    #         other = other.spc
    #     self.spc = self.spc.__truediv__(other)

    def __abs__(self) -> "SpectraFrame":
        return SpectraFrame(spc=np.abs(self.spc), wl=self.wl, data=self.data)

    def __round__(self, n: int = 0) -> "SpectraFrame":
        return SpectraFrame(spc=np.round(self.spc, n), wl=self.wl, data=self.data)

    def __floor__(self) -> "SpectraFrame":
        return SpectraFrame(spc=np.floor(self.spc), wl=self.wl, data=self.data)

    def __ceil__(self) -> "SpectraFrame":
        return SpectraFrame(spc=np.ceil(self.spc), wl=self.wl, data=self.data)

    def __trunc__(self) -> "SpectraFrame":
        return SpectraFrame(spc=np.trunc(self.spc), wl=self.wl, data=self.data)

    def __array__(self) -> np.ndarray:
        """Return spectral data when converted to numpy array

        This method is called when np.array(sf) is used on a SpectraFrame object.

        Returns
        -------
        np.ndarray
            The spectral data array (self.spc)
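
        Examples
        --------
        A minimal sketch (the frame is constructed the same way as in the
        other examples in this class):

        >>> sf = SpectraFrame(np.eye(2), wl=np.array([400, 500]), data={"id": [1, 2]})
        >>> np.array(sf)
        array([[1., 0.],
               [0., 1.]])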
        """
        return self.spc

    # ----------------------------------------------------------------------
    # Wavelengths

    def wl_resample(
        self, new_wl: np.ndarray, method="interp1d", **kwargs
    ) -> "SpectraFrame":
        """Resample wavelengths, i.e. shift wavelenghts with interpolation

        Parameters
        ----------
        new_wl : np.ndarray
            New wavenumbers
        method : str, optional
            Method for interpolation. Currently only "interp1d" is supported,
            which uses the `scipy.interpolate.interp1d` function.
        kwargs : dict, optional
            Additional parameters to be passed to the interpolator function.
            See `scipy.interpolate.interp1d` docs for more details.

        Returns
        -------
        SpectraFrame
            A new SpectraFrame object with `new_wl` as wavenumbers and
            interpolated signal values as spectral data. The `.data` part
            remains unchanged.

        Raises
        ------
        NotImplementedError
            Unimplemented method of interpolation.
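
        Examples
        --------
        A minimal sketch, assuming the default linear interpolation of
        `scipy.interpolate.interp1d`:

        >>> sf = SpectraFrame(
        ...     spc=np.arange(5, dtype=float).reshape(1, -1),
        ...     wl=np.array([400, 500, 600, 700, 800]),
        ...     data={"id": [1]},
        ... )
        >>> sf.wl_resample(np.array([450, 550])).spc
        array([[0.5, 1.5]])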
        """
        if method == "interp1d":
            interpolator = scipy.interpolate.interp1d(x=self.wl, y=self.spc, **kwargs)
            new_spc = interpolator(new_wl)
        else:
            raise NotImplementedError("Other methods not available yet")

        return SpectraFrame(new_spc, wl=new_wl, data=self.data)

    def resample_wl(
        self, new_wl: np.ndarray, method="interp1d", **kwargs
    ) -> "SpectraFrame":
        """Resample wavelengths (deprecated name for ``wl_resample``).

        This method is kept for backward compatibility. Use ``wl_resample``.
        """
        warnings.warn(
            "resample_wl is deprecated; use wl_resample instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.wl_resample(new_wl, method=method, **kwargs)

    # ----------------------------------------------------------------------
    # Stats & Applys
    def _get_axis(self, axis, groupby=None) -> int:
        """Get axis value in standard format"""
        if groupby is not None:
            return 0
        if axis in [0, "index"]:
            return 0
        elif axis in [1, "columns"]:
            return 1
        else:
            raise ValueError(f"Unexpected `axis` value {axis}")

    def _get_groupby(self, groupby) -> Union[list[str], None]:
        """Format and validate groupby value"""
        if groupby is None:
            return None

        # Grouped
        if isinstance(groupby, str):
            groupby = [groupby]

        # Check the names are in the data
        for name in groupby:
            if name not in self.data.columns:
                raise ValueError(f"Column '{name}' is not presented in the data")

        return groupby

    def _apply_func(
        self,
        func: Union[str, Callable],
        *args,
        data: Optional[np.ndarray] = None,
        axis: int = 1,
        **kwargs,
    ) -> np.ndarray:
        """Apply a function alog an axis

        Dispatches calculation to `np.apply_alog_axis` (if func is callable) or
        `np.<func>` (if func is a string)

        Parameters
        ----------
        func : Union[str, Callable]
            Either a string with the name of a numpy function, e.g. "max", "mean", etc.
            Or a callable function that can be passed to `numpy.apply_along_axis`
        data : np.ndarray, optional
            To which data apply the function, by default `self.spc`
            This parameter is useful for cases when the function must be applied to
            different parts of the spectral data, e.g. when groupby is used
        axis : int, optional
            Standard axis. Same as in `numpy` or `pandas`, by default 1

        Returns
        -------
        np.ndarray
            The output array. The shape of out is identical to the shape of data, except
            along the axis dimension. This axis is removed, and replaced with new
            dimensions equal to the shape of the return value of func. So if func
            returns a scalar, the output will be either a single row (axis=0) or
            single column (axis=1) matrix.

        Raises
        ------
        ValueError
            Function with provided name `func` was not found in `numpy`
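
        Examples
        --------
        A minimal sketch of both dispatch paths (string vs. callable):

        >>> sf = SpectraFrame(
        ...     spc=np.arange(6, dtype=float).reshape(2, 3),
        ...     wl=np.array([1, 2, 3]),
        ...     data={"id": [1, 2]},
        ... )
        >>> sf._apply_func("mean", axis=1)  # dispatched to np.mean
        array([[1.],
               [4.]])
        >>> sf._apply_func(np.mean, axis=0)  # via np.apply_along_axis
        array([[1.5, 2.5, 3.5]])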
        """
        # Check and prepare parameters
        if data is None:
            data = self.spc

        if isinstance(func, str):
            name = func
            if hasattr(np, name):
                func = getattr(np, name)
            else:
                raise ValueError(f"Could not find function {name} in `numpy`")

            res: np.ndarray = func(data, *args, axis=axis, **kwargs)
            # Functions like np.quantile behave differently than apply_along_axis
            # Here we make the shape of the matrix to be the same
            if (res.ndim > 1) and (axis == 1):
                res = res.T
        else:
            res = np.apply_along_axis(func, axis, data, *args, **kwargs)

        # Reshape the result to keep dimensions
        if res.ndim == 1:
            res = res.reshape((1, -1)) if axis == 0 else res.reshape((-1, 1))

        return res

    def apply(
        self,
        func: Union[str, Callable],
        *args,
        groupby: Union[str, list[str], None] = None,
        axis: int = 0,
        **kwargs,
    ) -> "SpectraFrame":
        """Apply function to the spectral data

        Parameters
        ----------
        func : Union[str, callable]
            Either a string with the name of a numpy function, e.g. "max", "mean", etc.
            Or a callable function that can be passed to `numpy.apply_along_axis`
        groupby : Union[str, list[str]], optional
            Single or list of `data` column names to use for grouping the data.
            By default None, so the function is applied to all spectral data.
        axis : int, optional
            Standard axis, same as in `numpy` or `pandas`. By default 0. When
            `groupby` is provided, the calculation is always along axis 0.

        Returns
        -------
        SpectraFrame
            Output spectral frame where
            * `out.spc` is the result of `func`
            * `out.wl` is either the same (axis=0, or axis=1 when `nwl` matches)
              or a 0..N range (axis=1 and `nwl` does not match)
            * `out.data` is the same if axis=1. If axis=0, it is either empty
              (no grouping) or represents the grouping.
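
        Examples
        --------
        A minimal sketch of a grouped reduction (one mean spectrum per group;
        assumes spectral columns are labeled by wavelength, as elsewhere in
        this class):

        >>> sf = SpectraFrame(
        ...     spc=np.arange(8, dtype=float).reshape(4, 2),
        ...     wl=np.array([400, 500]),
        ...     data={"group": list("ABAB")},
        ... )
        >>> sf.apply("mean", groupby="group").spc
        array([[2., 3.],
               [4., 5.]])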
        """

        # Prepare arguments
        axis = self._get_axis(axis, groupby)
        groupby = self._get_groupby(groupby)

        # Prepare default values
        new_wl = self.wl if axis == 0 else None
        new_data = self.data if axis == 1 else None

        if groupby is None:
            new_spc = self._apply_func(func, *args, axis=axis, **kwargs)
        else:
            # Prepare a dataframe for groupby aggregation
            grouped = self.to_pandas().groupby(groupby, observed=True)[self.wl]

            # Prepare list of group names as dicts {'column name': 'column value', ...}
            keys = [i for i, _ in grouped]
            groups = [dict(zip(groupby, gr)) for gr in keys]

            # Apply to each group
            spc_list = [
                self._apply_func(func, *args, data=group.values, axis=0, **kwargs)
                for _, group in grouped
            ]
            data_list = [
                pd.DataFrame({**gr, "group_index": range(spc_list[i].shape[0])})
                for i, gr in enumerate(groups)
            ]

            # Combine
            new_spc = np.concatenate(spc_list, axis=0)
            new_data = pd.concat(data_list, axis=0, ignore_index=True)

        # If the applied function returns the same number of wavelengths,
        # we assume that the wavelengths are unchanged, e.g. after baseline
        # correction, smoothing, etc.
        if (new_wl is None) and (new_spc.shape[1] == self.nwl):
            new_wl = self.wl

        return SpectraFrame(new_spc, wl=new_wl, data=new_data)

    def area(self) -> "SpectraFrame":
        """Calculate area under the spectra"""
        return SpectraFrame(
            scipy.integrate.trapezoid(self.spc, x=self.wl, axis=1).reshape((-1, 1)),
            wl=None,
            data=self.data,
        )

    # ----------------------------------------------------------------------
    # Dispatching to numpy methods
    # Each method below forwards to `apply` with the corresponding numpy
    # function; `ignore_na=True` dispatches to the NaN-aware variant
    # (e.g. `nanmean` instead of `mean`).
    # TODO: It would be good to group the method declarations below

    def min(
        self, *args, groupby=None, axis=1, ignore_na=False, **kwargs
    ) -> "SpectraFrame":
        func = "min" if not ignore_na else "nanmin"
        return self.apply(func, *args, groupby=groupby, axis=axis, **kwargs)

    def max(
        self, *args, groupby=None, axis=1, ignore_na=False, **kwargs
    ) -> "SpectraFrame":
        func = "max" if not ignore_na else "nanmax"
        return self.apply(func, *args, groupby=groupby, axis=axis, **kwargs)

    def sum(
        self, *args, groupby=None, axis=1, ignore_na=False, **kwargs
    ) -> "SpectraFrame":
        func = "sum" if not ignore_na else "nansum"
        return self.apply(func, *args, groupby=groupby, axis=axis, **kwargs)

    def mean(
        self, *args, groupby=None, axis=1, ignore_na=False, **kwargs
    ) -> "SpectraFrame":
        func = "mean" if not ignore_na else "nanmean"
        return self.apply(func, *args, groupby=groupby, axis=axis, **kwargs)

    def std(
        self, *args, groupby=None, axis=1, ignore_na=False, **kwargs
    ) -> "SpectraFrame":
        func = "std" if not ignore_na else "nanstd"
        return self.apply(func, *args, groupby=groupby, axis=axis, **kwargs)

    def median(
        self, *args, groupby=None, axis=1, ignore_na=False, **kwargs
    ) -> "SpectraFrame":
        func = "median" if not ignore_na else "nanmedian"
        return self.apply(func, *args, groupby=groupby, axis=axis, **kwargs)

    def mad(
        self, *args, groupby=None, axis=1, ignore_na=False, **kwargs
    ) -> "SpectraFrame":
        # Median absolute deviation: median(|x - median(x)|)
        if ignore_na:
            median = lambda x: np.nanmedian(x, *args, **kwargs)
        else:
            median = lambda x: np.median(x, *args, **kwargs)
        return self.apply(
            lambda x: median(np.absolute(x - median(x))), groupby=groupby, axis=axis
        )

    def quantile(
        self, q, *args, groupby=None, axis=1, ignore_na=False, **kwargs
    ) -> "SpectraFrame":
        func = "quantile" if not ignore_na else "nanquantile"
        return self.apply(func, q, *args, groupby=groupby, axis=axis, **kwargs)

    # ----------------------------------------------------------------------
    # Multidimensional rearrangements via einops
    def _fill_missing_grid(
        self,
        columns: list[str],
        fill_value: Optional[float] = None,
        **grid_values: dict[str, list[Any]],
    ) -> "SpectraFrame":
        """Fill missing coordinate combinations in a ragged grid.

        During preprocessing, it is common to exclude some spectra, so some
        coordinate combinations may be missing in the data (e.g. excluded
        'x, y' pixels of a spectral image). Downstream, such data cannot be
        correctly reshaped without additional handling. This method fills the
        missing combinations with a specified `fill_value`, resulting in a
        complete grid.

        Parameters
        ----------
        columns : list of str
            List of data column names to define the grid axes. Must be present in
            `self.data`.
        fill_value : Optional[float], optional
            Value to use for filling missing spectra. If None (default), missing
            spectra are filled with NaNs (``np.nan``).
        **grid_values : dict of str to list of Any
            Optional custom grid values for specific columns
            (i.e. 'colname': [grid_value1, grid_value2, ...]). If not provided,
            the unique values from `self['colname']` are used. This is to provide
            control over the grid points. It may be used to include values not present
            in the data (e.g., pad images with additional pixels); or, the opposite,
            to exclude unwanted values present in the data.

        Returns
        -------
        SpectraFrame
            A new SpectraFrame with full grid of coordinate combinations.
            Missing combinations are filled with `fill_value`.
            The order of spectra is sorted by the order of the grid combinations.
            Only the grid axis columns in ``columns`` are retained in ``.data``.

        Raises
        ------
        ValueError
            If any of the specified columns are not present in `self.data`, or
            if custom `grid_values` are provided for columns not in `columns`.
        ValueError
            If duplicate coordinate combinations are present for the requested grid
            axes in ``columns``.

        Notes
        -----
        Padding may promote the dtype of ``spc`` to accommodate ``fill_value``.
        In particular, the default ``fill_value=None`` pads with ``np.nan`` and
        therefore promotes integer spectra to floating point.

        Examples
        --------
        >>> # Create a SpectraFrame with missing combinations
        >>> sf = SpectraFrame(
        ...     spc=np.array([[1, 2], [3, 4], [5, 6]]),
        ...     wl=np.array([400, 500]),
        ...     data=pd.DataFrame({
        ...         "x": [0, 0, 1],
        ...         "y": [0, 1, 0]
        ...     })
        ... )
        >>> print(sf)
           400  500  x  y
        0    1    2  0  0
        1    3    4  0  1
        2    5    6  1  0

        >>> # Fill missing grid combinations for 'x' and 'y'
        >>> filled_sf = sf._fill_missing_grid(columns=['x', 'y'])
        >>> print(filled_sf)
           400  500  x  y
        0  1.0  2.0  0  0
        1  3.0  4.0  0  1
        2  5.0  6.0  1  0
        3  NaN  NaN  1  1

        >>> # Fill missing grid with custom grid values and fill value
        >>> filled_sf_custom = sf._fill_missing_grid(
        ...     columns=['x', 'y'],
        ...     fill_value=0,
        ...     x=[2, 1, 0],
        ...     y=[0, 1, 2]
        ... )
        >>> print(filled_sf_custom)
           400  500  x  y
        0    0    0  2  0
        1    0    0  2  1
        2    0    0  2  2
        3    5    6  1  0
        4    0    0  1  1
        5    0    0  1  2
        6    1    2  0  0
        7    3    4  0  1
        8    0    0  0  2
        """
        # Validate: no grid axes requested (nothing to fill/sort).
        if not columns:
            raise ValueError("No grid axes specified in `columns`.")

        # Validate: columns is iterable
        try:
            columns = list(columns)
        except TypeError:
            raise ValueError("`columns` must be iterable")

        # Validate: all columns are in data.columns
        extra_columns = set(columns) - set(self.data.columns)
        if extra_columns:
            raise ValueError(f"Columns not present in data: {sorted(extra_columns)!r}")

        # Validate: all custom grid_values are in columns
        missing_columns = set(grid_values.keys()) - set(columns)
        if missing_columns:
            raise ValueError(
                "Custom grid_values provided for columns not present in data: "
                f"{sorted(missing_columns)!r}"
            )

        # Validate: all grid_values are iterable
        try:
            for col, values in grid_values.items():
                grid_values[col] = list(values)
        except TypeError:
            raise ValueError("All `grid_values` must be iterable")

        # Validate: no duplicate coordinate tuples for the requested grid axes.
        # Duplicates make gridding ambiguous (more than one spectrum per cell).
        coord_df = self.data.loc[:, columns]
        if coord_df.duplicated(subset=columns).any():
            counts = coord_df.groupby(columns, dropna=False).size()
            dup_counts = counts[counts > 1].sort_values(ascending=False)
            examples = list(dup_counts.head(5).index)
            raise ValueError(
                "Duplicate coordinate combinations found for grid axes "
                f"{columns!r}. Examples (up to 5): {examples!r}. "
                "Include additional axis columns in the pattern or aggregate first."
            )

        # Prepare grid_values for each column
        out_grid_values = {col: _sorted_unique(self.data[col]) for col in columns}
        out_grid_values.update(grid_values)

        # Create full grid of coordinate combinations
        grids = np.meshgrid(*[out_grid_values[col] for col in columns], indexing="ij")
        grid_tuples = list(zip(*(g.ravel() for g in grids)))
        grid_df = pd.DataFrame(grid_tuples, columns=columns)

        # Merge with existing data to find missing combinations
        merged = pd.merge(
            grid_df,
            self.data.loc[:, columns].assign(_orig_index=self.data.index),
            on=columns,
            how="left",
            indicator=True,
        )

        # Prepare SpectraFrame with missing combinations
        missing_mask = merged["_merge"] == "left_only"
        missing_sf = None
        if missing_mask.any():
            fill_value = np.nan if fill_value is None else fill_value
            out_dtype = np.result_type(self.spc.dtype, fill_value)
            # Create new SpectraFrame with missing entries filled
            missing_data = merged.loc[missing_mask, columns]
            missing_spc = np.full(
                (missing_mask.sum(), self.nwl), fill_value, dtype=out_dtype
            )
            missing_sf = SpectraFrame(spc=missing_spc, wl=self.wl, data=missing_data)

        # Prepare existing SpectraFrame (might be a subset of self)
        existing_mask = merged["_merge"] == "both"
        n_existing = existing_mask.sum()
        if n_existing == 0:
            raise ValueError("No existing spectra found to fill the grid.")

        # Get existing spectra in the order of the merged grid
        existing_sf = self[merged._orig_index[existing_mask], columns, :]
        existing_sf.index = merged.index[existing_mask].values

        # Combine existing and missing
        if missing_sf is None:
            return existing_sf
        return SpectraFrame(
            spc=np.vstack([existing_sf.spc, missing_sf.spc]),
            wl=self.wl,
            data=pd.concat([existing_sf.data, missing_sf.data], ignore_index=False),
        ).sort_index()

    def _get_einops_rest_column(self, kept_columns: list[str]) -> pd.Series:
        """Encode remaining metadata columns as a deterministic integer axis.

        This is used for einops reductions when the output pattern omits some
        metadata columns; those omitted columns are collapsed into a single
        intermediate axis ("rest") prior to reduction.
        """
        rest_columns = [col for col in self.data.columns if col not in kept_columns]

        # Pick a non-colliding column name for the intermediate rest ID.
        rest_col = "_einops_rest"
        while rest_col in self.data.columns:
            rest_col = f"{rest_col}_"

        if not rest_columns:
            rest_codes = np.zeros(self.nspc, dtype=int)
        else:
            rest_tuples = list(
                self.data.loc[:, rest_columns].itertuples(index=False, name=None)
            )
            rest_codes = pd.Categorical(rest_tuples).codes

        return pd.Series(
            rest_codes,
            name=rest_col,
            index=self.data.index,
            dtype=pd.CategoricalDtype(ordered=True),
        )

    def _prepare_for_einops(
        self,
        reduction: str,
        pattern: str,
        fill_value: Optional[float] = None,
        **grid_values: dict[str, list[Any]],
    ) -> tuple[np.ndarray, str, dict[str, int]]:
        """Prepare data for einops rearrangement/reduction.

        Handles common part of using einops rearrange/reduce on SpectraFrame data.
        * Validates the pattern.
        * Sorts the data by the specified axes (so that reshaping is consistent).
        * Fills missing grid entries if requested.
        * Prepares the einops pattern and dict with dimension sizes.

        Parameters
        ----------
        reduction : str
            Type of einops operation: "rearrange" or "reduce".
        pattern : str
            Einops-style output pattern (only the right side!). Must include 'wl' for
            rearrange. For example, if the desired rearrangement is
            "(y x) wl -> y x wl" (i.e., converting to a hyperspectral cube),
            we pass only `pattern="y x wl"`; the left side is inferred from the data.
        fill_value : Optional[float]
            Value to fill missing grid entries. If None (default), missing entries are
            filled with NaNs (``np.nan``).
        **grid_values: dict[str, list[Any]],
            Optional list of specific grid values for custom grids, e.g. padding images.

        Returns
        -------
        tuple[np.ndarray, str, dict[str, int]]
            A tuple with:
            * Sorted spectral data array ready for einops.
            * Einops pattern string including both sides.
            * Dict with sizes for each axis/dimension.

        Raises
        ------
        ValueError
            If the pattern is invalid or incompatible with the data.
        NotImplementedError
            If the pattern includes unsupported features such as ellipsis (...).

        Examples
        --------
        >>> np.random.seed(42)
        >>> sf = SpectraFrame(
        ...     spc=np.arange(4*5).reshape((4, 5)),
        ...     wl=np.array([400, 500, 600, 700, 800]),
        ...     data=pd.DataFrame({
        ...         "y": [1, 0, 1, 0],
        ...         "x": [0, 0, 1, 1]
        ...     })
        ... )
        >>> sorted_spc, einops_pattern, sizes = sf._prepare_for_einops(
        ...     reduction="rearrange",
        ...     pattern="y x wl"
        ... )
        >>> print(einops_pattern)
        (y x) wl -> y x wl
        >>> print(sizes)
        {'y': 2, 'x': 2, 'wl': 5}
        >>> print(sorted_spc)
        [[ 5  6  7  8  9]
         [15 16 17 18 19]
         [ 0  1  2  3  4]
         [10 11 12 13 14]]
        """
        names = _einops_pattern_to_names(pattern)

        # Validate: non-empty pattern
        if not names:
            raise ValueError("Pattern is empty or invalid")
        # Validate: 'wl' must be present for rearrange
        if reduction == "rearrange" and "wl" not in names:
            raise ValueError("Pattern must include 'wl'")
        # Validate: no duplicates
        if len(names) != len(set(names)):
            raise ValueError("Pattern contains duplicate axis names")

        # Extract metadata axes (everything except wavelength).
        axis_names = [a for a in names if a != "wl"]

        # TODO: Does not support ellipsis (...) yet
        if "..." in axis_names:
            raise NotImplementedError(
                "Ellipsis (...) in einops patterns is not supported yet."
            )

        # Validate: all names (except 'wl') are in self.data.columns
        extra_names = set(names) - set(self.data.columns) - {"wl"}
        if extra_names:
            raise ValueError(
                "Pattern references axes not present in sf.data columns: "
                f"{sorted(extra_names)!r}"
            )

        # Prepare einops pattern
        if reduction == "rearrange":
            left_pattern = f"({' '.join(axis_names)}) wl"
        elif reduction == "reduce":
            axis_part = " ".join(axis_names)
            left_pattern = f"({axis_part} rest) wl" if axis_part else "(rest) wl"
        else:
            raise ValueError(f"Unknown reduction type: {reduction!r}")
        einops_pattern = f"{left_pattern} -> {pattern}"

        # Prepare the source frame and axis sizes for einops.
        einops_sf = self

        # For reductions, add "rest" axis for omitted metadata columns
        # Define "rest" as unique combinations of the omitted columns.
        rest_col = ""
        if reduction == "reduce":
            rest = self._get_einops_rest_column(axis_names)
            rest_col = rest.name
            einops_sf = einops_sf.assign(**{rest_col: rest})
            axis_names.append(rest_col)

        # Fill the grid and sort accordingly.
        # This has to be done regardless of fill_value,
        # because grid_values may specify additional grid points.
        einops_sf = einops_sf._fill_missing_grid(
            axis_names, fill_value=fill_value, **grid_values
        )

        # Get sizes for each axis/dimension
        sizes = {
            col: einops_sf.data[col].nunique() for col in axis_names if col != rest_col
        }
        if "wl" in names:
            sizes["wl"] = self.nwl

        return einops_sf.spc, einops_pattern, sizes

    def rearrange(
        self,
        pattern: str,
        fill_value: Optional[float] = None,
        **grid_values: dict[str, list[Any]],
    ) -> np.ndarray:
        """Rearrange spectra into a dense multidimensional tensor via einops patterns.

        This is intended for hyperspectral images and other gridded measurements where
        sample coordinates are stored as columns in ``sf.data`` (e.g. ``y``, ``x``,
        ``z``, ``time``, ``batch``) and wavelengths are stored in ``sf.wl``. One
        common use case is reshaping an unfolded 2D spectral data matrix into a
        hyperspectral cube with shape ``(y, x, wl)`` or ``(batch, y, x, wl)``.

        Parameters
        ----------
        pattern : str
            Einops-style *output* pattern. Must include ``wl``, e.g.
            ``"batch y x wl"`` or ``"(batch y) x wl"``.
        fill_value : Optional[float], optional
            Fill missing coordinate combinations (ragged grids) with this value. If
            None (default), missing entries are filled with NaNs (``np.nan``).
        **grid_values: dict[str, list[Any]]
            Optional grid specifications. For each axis an explicit ordered
            list of axis values (e.g. ``x=[0, 1, 2, 3]``).

        Returns
        -------
        np.ndarray
            A dense tensor matching the requested pattern.

        Raises
        ------
        ValueError
            If the pattern is invalid or incompatible with the data.
        NotImplementedError
            If the pattern includes unsupported features such as ellipsis (...).

        Notes
        -----
        If padding is applied, the output dtype may be promoted to accommodate
        ``fill_value`` (e.g. integer spectra padded with ``np.nan`` become floats).

        Examples
        --------
        >>> np.random.seed(42)
        >>> sf = SpectraFrame(
        ...     spc=np.arange(3*5).reshape((3, 5)),
        ...     wl=np.array([400, 500, 600, 700, 800]),
        ...     data=pd.DataFrame({
        ...         "y": [1, 0, 0],
        ...         "x": [0, 0, 1]
        ...     })
        ... )
        >>> print(sf)
           400  500  600  700  800  y  x
        0    0    1    2    3    4  1  0
        1    5    6    7    8    9  0  0
        2   10   11   12   13   14  0  1

        >>> cube = sf.rearrange(pattern="y x wl", fill_value=np.nan)
        >>> print(cube.shape)
        (2, 2, 5)
        >>> print(cube[:,:,0]) # wl=400 slice
        [[ 5. 10.]
         [ 0. nan]]
        """
        einops = _require_einops()
        sorted_spc, einops_pattern, sizes = self._prepare_for_einops(
            "rearrange", pattern, fill_value=fill_value, **grid_values
        )

        # Validate: total size matches number of spectra
        sizes.pop("wl", None)  # wl is not counted in total size
        if np.prod(list(sizes.values())) != len(sorted_spc):
            raise ValueError(
                "Cannot reshape: number of spectra does not match the implied grid "
                "size. Ensure coordinate tuples are unique and that the requested "
                "grid axes match the available metadata."
            )

        # Rest of validation and rearrangement is done on the einops side
        return einops.rearrange(sorted_spc, einops_pattern, **sizes)

    def reduce(
        self,
        reducer: Union[str, Callable],
        pattern: str,
        *,
        ignore_na: bool = False,
        fill_value: Optional[float] = None,
        **grid_values: Any,
    ) -> Union[np.ndarray, "SpectraFrame"]:
        """Reduce spectra along axes implied by an einops-style output pattern.

        The pattern uses metadata axes from ``sf.data`` and may include ``wl`` (to keep
        spectra) or omit it (to reduce over wavelengths). When the output is 2D with
        ``wl`` as the last axis the result is equivalent to
        ``SpectraFrame.apply(reducer, groupby=...).spc``.

        Parameters
        ----------
        reducer : Union[str, Callable]
            Reduction to apply. Supported strings: ``"mean"``, ``"sum"``, ``"min"``,
            ``"max"``, ``"std"``, ``"median"``. Callables are also supported.
        pattern : str
            Einops-style *output* pattern.
        ignore_na : bool, optional
            Use NaN-aware reductions for supported string reducers. Defaults to False.
        fill_value : Optional[float], optional
            When returning an array, fill missing coordinate combinations in reshaping.
            If None (default), missing entries are filled with NaNs (``np.nan``).
        **grid_values: dict[str, list[Any]]
            Optional grid specifications. For each axis an explicit ordered
            list of axis values (e.g. ``x=[0, 1, 2, 3]``).
            NOTE: At the moment, the order of the provided values is not preserved
            in the output tensor; the values are always sorted. This may change in
            future releases.

        Returns
        -------
        np.ndarray
            Reduced array matching the requested pattern.

        Notes
        -----
        If padding is applied, the output dtype may be promoted to accommodate
        ``fill_value`` (e.g. integer spectra padded with ``np.nan`` become floats).

        Examples
        --------
        >>> np.random.seed(42)
        >>> sf = SpectraFrame(
        ...     spc=np.arange(6*5).reshape((6, 5)),
        ...     wl=np.array([400, 500, 600, 700, 800]),
        ...     data=pd.DataFrame({
        ...         "y": [1, 0, 1, 0, 1, 1],
        ...         "x": [0, 0, 1, 1, 0, 1],
        ...         "batch": [0, 0, 0, 1, 1, 1]
        ...     })
        ... )
        >>> print(sf)
           400  500  600  700  800  y  x  batch
        0    0    1    2    3    4  1  0      0
        1    5    6    7    8    9  0  0      0
        2   10   11   12   13   14  1  1      0
        3   15   16   17   18   19  0  1      1
        4   20   21   22   23   24  1  0      1
        5   25   26   27   28   29  1  1      1

        >>> # Reduce to mean spectra per pixel (y, x)
        >>> reduced = sf.reduce(reducer="mean", pattern="y x wl", fill_value=np.nan)
        >>> print(reduced.shape)
        (2, 2, 5)
        >>> print(reduced[:,:,0]) # wl=400 slice
        [[ nan  nan]
         [10.  17.5]]

        >>> # Same reduction as above, but ignoring the NaNs introduced by the fill
        >>> reduced = sf.reduce(
        ...     reducer="mean",
        ...     pattern="y x wl",
        ...     fill_value=np.nan,
        ...     ignore_na=True
        ... )
        >>> print(reduced[:,:,0]) # wl=400 slice
        [[ 5.  15. ]
         [10.  17.5]]
        """

        einops = _require_einops()
        sorted_spc, einops_pattern, sizes = self._prepare_for_einops(
            "reduce", pattern, fill_value=fill_value, **grid_values
        )

        # Parse reducer
        if isinstance(reducer, str):
            reducer_key = reducer.lower()
            reducer_prefix = "nan" if ignore_na else ""
            numpy_func_name = f"{reducer_prefix}{reducer_key}"
            reducer_names = ["mean", "sum", "min", "max", "std", "median"]
            if not hasattr(np, numpy_func_name) or reducer_key not in reducer_names:
                raise ValueError(
                    "Unsupported reducer. Expected one of "
                    "['mean', 'sum', 'min', 'max', 'std', 'median'], "
                    f"got {reducer!r}."
                )
            func: Callable = getattr(np, numpy_func_name)
        elif callable(reducer):
            func: Callable = reducer
        else:
            raise ValueError("Reducer must be either a string or a callable.")

        return einops.reduce(
            sorted_spc,
            einops_pattern,
            func,
            **sizes,
        )
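String reducers are resolved to NumPy functions, with a `nan` prefix applied when `ignore_na=True`. A standalone sketch of that lookup (pyspc itself is not required):

```python
import numpy as np

def resolve_reducer(name, ignore_na=False):
    """Map a reducer name to an (optionally NaN-aware) NumPy function."""
    allowed = {"mean", "sum", "min", "max", "std", "median"}
    key = name.lower()
    if key not in allowed:
        raise ValueError(f"Unsupported reducer: {name!r}")
    return getattr(np, ("nan" if ignore_na else "") + key)

vals = np.array([1.0, np.nan, 3.0])
plain = resolve_reducer("mean")(vals)            # np.mean -> nan
nan_aware = resolve_reducer("mean", True)(vals)  # np.nanmean -> 2.0
```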

    # ----------------------------------------------------------------------
    # Manipulations
    def normalize(
        self,
        method: str,
        ignore_na: bool = True,
        peak_range: Optional[tuple[float, float]] = None,
        **kwargs,
    ) -> "SpectraFrame":
        """Dispatcher for spectra normalization

        Parameters
        ----------
        method : str
            Method of normalization. Available options: '01', 'area', 'vector', 'mean',
            'peak' (normalize by peak value in the given range). By default, peak value
            is approximated by the maximum value in the given range. To use a different
            method, use the `**kwargs` to pass to `around_max_peak_fit` function.
        ignore_na : bool, optional
            Ignore NaN values in the data, by default True
        peak_range : tuple[float, float], optional
            Range of wavelength/wavenumber to use for peak normalization.
            If None (default), the whole range is used.

        Returns
        -------
        SpectraFrame
            A new SpectraFrame with normalized values

        Raises
        ------
        ValueError
            If an unknown normalization method is provided
        """
        spc = self.copy()
        if method == "01":
            spc = spc - spc.min(axis=1, ignore_na=ignore_na)
            spc = spc / spc.max(axis=1, ignore_na=ignore_na)
        elif method == "area":
            spc = spc / spc.area()
        elif method == "peak":
            if peak_range is None:
                peak_range: tuple[float, float] = (self.wl[0], self.wl[-1])

            peak_intensities = around_max_peak_fit(
                x=self[:, :, peak_range[0] : peak_range[1]].wl,
                y=self[:, :, peak_range[0] : peak_range[1]].spc,
                **kwargs,
            )
            spc = spc / peak_intensities.y_max.values.reshape((spc.nspc, -1))
        elif method == "vector":
            if ignore_na:
                spc = spc / np.sqrt(
                    np.nansum(np.power(spc.spc, 2), axis=1, keepdims=True)
                )
            else:
                spc = spc / np.sqrt(np.sum(np.power(spc.spc, 2), axis=1, keepdims=True))
        elif method == "mean":
            spc = spc / spc.mean(axis=1, ignore_na=ignore_na)
        else:
            raise ValueError("Unknown normalization method")

        return spc
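As an illustration of the `"vector"` branch above, L2 normalization can be reproduced with plain NumPy (a sketch with arbitrary values; `pyspc` is not required):

```python
import numpy as np

spc = np.array([[3.0, 4.0],
                [0.0, 2.0]])

# Vector (L2) normalization: divide each spectrum (row) by its Euclidean
# norm. nansum mirrors the ignore_na=True behaviour.
norms = np.sqrt(np.nansum(spc ** 2, axis=1, keepdims=True))
normalized = spc / norms
```

Every row of the result has unit Euclidean norm.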

    def smooth(self, method: str = "savgol", **kwargs) -> "SpectraFrame":
        """Dispatcher for spectra smoothing

        Parameters
        ----------
        method : str, optional
            Method of smoothing. Currently, only "savgol" is available
        kwargs : dict
            Additional parameters to pass to the smoothing method

        Returns
        -------
        SpectraFrame
            A new frame with smoothed values

        Raises
        ------
        NotImplementedError
            Unknown or unimplemented smoothing method
        """
        spc = self.copy()
        if method == "savgol":
            spc.spc = scipy.signal.savgol_filter(spc.spc, **kwargs)
        else:
            raise NotImplementedError("Method is not implemented yet")

        return spc
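Since `**kwargs` are forwarded verbatim to `scipy.signal.savgol_filter`, callers must supply at least `window_length` and `polyorder`. A direct SciPy sketch of the underlying call:

```python
import numpy as np
from scipy.signal import savgol_filter

wl = np.linspace(0.0, 1.0, 11)
spc = wl ** 2  # a noiseless quadratic "spectrum"

# The same kwargs would be passed as
# sf.smooth(method="savgol", window_length=5, polyorder=2).
smoothed = savgol_filter(spc, window_length=5, polyorder=2)
```

A degree-2 filter reproduces a quadratic exactly, which makes the kwargs pass-through easy to verify.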

    def baseline(self, method: str, **kwargs) -> "SpectraFrame":
        """Dispatcher for spectra baseline estimation

        Dispatches baseline correction to the corresponding method
        in the `pybaselines` package.
        In addition, a "rubberband" method is available.

        Parameters
        ----------
        method : str
            A name of a method in the `pybaselines` package (e.g. "airpls", "snip"),
            or "rubberband"
        kwargs: dict
            Additional parameters to pass to the baseline correction method

        Returns
        -------
        SpectraFrame
            A frame of estimated baselines

        Raises
        ------
        ValueError
            Unknown baseline method provided
        """
        baseline_fitter = pybaselines.Baseline(x_data=self.wl)
        if hasattr(baseline_fitter, method):
            baseline_method = getattr(baseline_fitter, method)
            baseline_func = lambda y: baseline_method(y, **kwargs)[0]
        elif method == "rubberband":
            baseline_func = lambda y: rubberband(self.wl, y, **kwargs)
        else:
            raise ValueError(
                "Unknown method. Method must be either "
                "from `pybaselines` or 'rubberband'"
            )
        return self.apply(baseline_func, axis=1)

    def sbaseline(self, method: str, **kwargs) -> "SpectraFrame":
        """Subtract baseline from the spectra

        Same as `.baseline()`, but returns a new frame with subtracted baseline.
        A shortcut for `SpectraFrame - SpectraFrame.baseline(...)` that allows
        method chaining, e.g. `sf.smooth().sbaseline("snip").normalize()`.
        """
        return self - self.baseline(method, **kwargs).spc

    # ----------------------------------------------------------------------
    # Format conversion

    def to_pandas(self, multiindex=False, string_names=False) -> pd.DataFrame:
        """Convert to a pandas DataFrame

        Parameters
        ----------
        multiindex : bool, optional
            Adds an index level to columns separating spectral data (`spc`) from
            meta data (`data`), by default False
        string_names : bool, optional
            Convert all column names to strings, by default False

        Returns
        -------
        pd.DataFrame
            Dataframe where spectral data is combined with meta data.
            Wavelengths are used as column names for spectral data part.
        """
        df = pd.DataFrame(self.spc, columns=self.wl, index=self.data.index)
        if not self.data.empty:
            df = pd.concat([df, self.data], axis=1)

        if string_names:
            df.columns = df.columns.map(str)

        if multiindex:
            df.columns = pd.MultiIndex.from_tuples(
                [("spc", wl) for wl in df.columns[: self.nwl]]
                + [("data", col) for col in df.columns[self.nwl :]]
            )

        return df
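The `multiindex=True` layout can be reproduced with plain pandas (a sketch; the values are made up):

```python
import numpy as np
import pandas as pd

spc = np.array([[1.0, 2.0], [3.0, 4.0]])
wl = [500, 600]
meta = pd.DataFrame({"group": ["A", "B"]})

# Combine spectra and metadata, then add a top column level separating
# "spc" columns from "data" columns.
df = pd.concat([pd.DataFrame(spc, columns=wl), meta], axis=1)
df.columns = pd.MultiIndex.from_tuples(
    [("spc", w) for w in wl] + [("data", "group")]
)
```

Selecting `df["spc"]` then recovers the spectral block with wavelengths as columns.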

    # ----------------------------------------------------------------------
    # Misc.
    def sample(self, n: int, replace: bool = False) -> "SpectraFrame":
        """Randomly sample `n` spectra, optionally with replacement."""
        indx = np.random.choice(self.nspc, size=n, replace=replace)
        return self[np.sort(indx), :, :, True]

    def __sizeof__(self):
        """Estimate the total memory usage"""
        return self.spc.__sizeof__() + self.data.__sizeof__() + self.wl.__sizeof__()

    # ----------------------------------------------------------------------
    # Plotting
    def _parse_string_or_vector_param(self, param: Union[str, ArrayLike]) -> pd.Series:
        if isinstance(param, str) and (param == "index"):
            return pd.Series(self.data.index)

        if isinstance(param, str) and (param in self.data.columns):
            return self.data[param]

        if len(param) == self.nspc:
            return pd.Series(param, index=self.data.index)

        raise TypeError(
            "Invalid parameter. It must be either 'index' or a data column name, or "
            "array-like (i.e. np.array, list) of length equal to the number of spectra."
        )

    def _prepare_plot_param(self, param: Union[None, str, ArrayLike]) -> pd.Series:
        if param is None:
            param = pd.Series(
                ["dummy"] * self.nspc, index=self.data.index, dtype="category"
            )
        else:
            param = self._parse_string_or_vector_param(param)

        param = (
            param.astype("category")
            .cat.add_categories("NA")
            .fillna("NA")
            .cat.remove_unused_categories()
        )

        return param
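The NA handling above, converting to a categorical with an explicit "NA" level so missing values remain plottable, works like this in plain pandas:

```python
import pandas as pd

param = pd.Series(["red", None, "blue"])
param = (
    param.astype("category")
    .cat.add_categories("NA")   # make "NA" a legal category
    .fillna("NA")               # replace missing values with it
    .cat.remove_unused_categories()
)
```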

    def plot(
        self,
        rows=None,
        columns=None,
        colors=None,
        palette: Union[list[str], str, None] = None,
        fig=None,
        **kwargs: Any,
    ):
        # Split **kwargs
        # TODO: Either add different kw params like https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplots.html
        # or infer from the name of kwarg where to put it.
        show_legend = kwargs.get("legend", colors is not None)
        sharex = kwargs.get("sharex", True)
        sharey = kwargs.get("sharey", True)

        # Convert to series all 'string or vector' params
        rows_series = self._prepare_plot_param(rows)
        cols_series = self._prepare_plot_param(columns)
        colorby_series = self._prepare_plot_param(colors)

        nrows = len(rows_series.cat.categories)
        ncols = len(cols_series.cat.categories)
        ncolors = len(colorby_series.cat.categories)

        # Prepare colors
        if palette is None:
            palette = plt.rcParams["axes.prop_cycle"].by_key()["color"]
            if ncolors > len(palette):
                palette = "viridis"

        if isinstance(palette, str):
            palette = [
                rgb2hex(plt.get_cmap(palette, ncolors)(i)) for i in range(ncolors)
            ]
        assert isinstance(palette, list)
        cmap = dict(zip(colorby_series.cat.categories, palette[:ncolors]))
        cmap.update({"NA": "gray"})
        colors_series = colorby_series.cat.rename_categories(cmap)

        # Get the figure and the axes for plot
        if fig is None:
            fig, axs = plt.subplots(
                nrows,
                ncols,
                squeeze=False,
                sharex=sharex,
                sharey=sharey,
                layout="tight",
            )
        else:
            axs = np.array(fig.get_axes()).reshape((nrows, ncols))

        # Prepare legend lines if needed
        legend_lines = [
            Line2D([0], [0], color=c, lw=4) for c in colors_series.cat.categories
        ]

        # For each combination of row and column categories
        for i, vrow in enumerate(rows_series.cat.categories):
            for j, vcol in enumerate(cols_series.cat.categories):
                # Filter all spectra related to the current subplot
                rowfilter = np.array(rows_series == vrow) & np.array(
                    cols_series == vcol
                )
                if np.any(rowfilter):
                    subdf = pd.DataFrame(self.spc[rowfilter, :], columns=self.wl)
                    subdf.T.plot(
                        kind="line",
                        ax=axs[i, j],
                        color=colors_series[rowfilter],
                        **kwargs,
                    )

                # Add legend if needed
                if show_legend:
                    axs[i, j].legend(legend_lines, colorby_series.cat.categories)
                else:
                    axs[i, j].legend().set_visible(False)

                # For the first rows and columns set titles
                if (i == 0) and (columns is not None):
                    axs[i, j].set_title(str(vcol))
                if (j == 0) and (rows is not None):
                    axs[i, j].set_ylabel(str(vrow))

        return fig, axs

    # ----------------------------------------------------------------------
    def _to_print_dataframe(self) -> pd.DataFrame:
        # Get value of pandas display.max_columns option
        max_columns = pd.options.display.max_columns
        if max_columns is None:
            max_columns = float("inf")
        if max_columns == 0:
            max_columns = 20  # Default value if option is set to 0

        max_rows = pd.options.display.max_rows
        if max_rows is None:
            max_rows = float("inf")
        if max_rows == 0:
            max_rows = 60  # Default value if option is set to 0

        if sum(self.shape[1:]) > max_columns:
            # TODO: improve truncated view
            print_df = self[:max_rows, :, [0, -1], True].to_pandas()
            print_df.insert(loc=1, column="...", value="...")
        else:
            print_df = self[:max_rows, :, :, True].to_pandas()
        return print_df

    def __str__(self) -> str:
        return self._to_print_dataframe().__str__()

    def __repr__(self) -> str:
        return self._to_print_dataframe().__repr__()

Attributes

shape: Tuple[int, int, int] property

A tuple representing the dimensionality of the Spectra

Returns:

Type Description
Tuple[int, int, int]:

A tuple of the following structure:

1. number of spectra (i.e. number of rows)
2. number of data columns
3. number of wavelength points

nwl: int property

Number of wavelength points

nspc: int property

Number of spectra in the object

is_equally_spaced: bool property

Are wavelength values equally spaced?

index: pd.Index property writable

Row indices (same as self.data.index).

Functions

__init__(spc, wl=None, data=None)

Create a new SpectraFrame object

Parameters:

Name Type Description Default
spc ArrayLike

Spectral data. A 2D array where each row represents a spectrum.

required
wl Optional[ArrayLike]

Spectral coordinates, i.e. wavelengths, wavenumbers, etc. If None, then the range 0..N is used, by default None.

None
data Optional[DataFrame]

Additional meta-data, by default None

None

Raises:

Type Description
ValueError

If the provided data or wl is not valid (i.e. wrong shape, etc.)

ValueError

If shapes do not match (e.g. number of rows in spc and data)

Examples:

>>> np.random.seed(42)
>>> sf = SpectraFrame(
...     np.random.rand(4,5),
...     wl=np.linspace(600,660,5),
...     data={"group": list("AABB")}
... )
>>> print(sf)
      600.0     615.0     630.0     645.0     660.0 group
0  0.374540  0.950714  0.731994  0.598658  0.156019     A
1  0.155995  0.058084  0.866176  0.601115  0.708073     A
2  0.020584  0.969910  0.832443  0.212339  0.181825     B
3  0.183405  0.304242  0.524756  0.431945  0.291229     B
Source code in pyspc/spectra.py
def __init__(  # noqa: C901
    self,
    spc: ArrayLike,
    wl: Optional[ArrayLike] = None,
    data: Union[pd.DataFrame, pd.Series, dict] = None,
) -> None:
    """Create a new SpectraFrame object

    Parameters
    ----------
    spc : ArrayLike
        Spectral data. A 2D array where each row represents a spectrum.
    wl : Optional[ArrayLike], optional
        Spectral coordinates, i.e. wavelengths, wavenumbers, etc.
        If None, then the range 0..N is used, by default None.
    data : Optional[pd.DataFrame], optional
        Additional meta-data, by default None

    Raises
    ------
    ValueError
        If the provided data or wl is not valid (i.e. wrong shape, etc.)
    ValueError
        If shapes do not match (e.g. number of rows in spc and data)

    Examples
    --------
    >>> np.random.seed(42)
    >>> sf = SpectraFrame(
    ...     np.random.rand(4,5),
    ...     wl=np.linspace(600,660,5),
    ...     data={"group": list("AABB")}
    ... )
    >>> print(sf)
          600.0     615.0     630.0     645.0     660.0 group
    0  0.374540  0.950714  0.731994  0.598658  0.156019     A
    1  0.155995  0.058084  0.866176  0.601115  0.708073     A
    2  0.020584  0.969910  0.832443  0.212339  0.181825     B
    3  0.183405  0.304242  0.524756  0.431945  0.291229     B
    """
    # Prepare SPC
    spc = np.array(spc)
    if spc.ndim == 1:
        spc = spc.reshape(1, -1)
    elif spc.ndim > 2:
        raise ValueError("Invalid spc is provided!")

    # Prepare wl
    if wl is None:
        wl = np.arange(spc.shape[1])
    else:
        wl = np.array(wl)
        if wl.ndim > 1:
            raise ValueError("Invalid wl is provided")

    # Parse data
    if data is None:
        data = pd.DataFrame(index=range(len(spc)), columns=None)
    if not isinstance(data, pd.DataFrame):
        data = pd.DataFrame(data)

    # Checks
    if spc.shape[1] != len(wl):
        raise ValueError(
            "length of wavelength must be equal to number of columns in spc"
        )

    if spc.shape[0] != data.shape[0]:
        raise ValueError(
            "data must have the same number of instances(rows) as spc has"
        )

    self.spc = spc
    self.wl = wl
    self.data = data

set_index(keys, *args, **kwargs)

Return a new SpectraFrame with a new index.

Note: inplace is ignored to match SpectraFrame copy semantics.

Source code in pyspc/spectra.py
def set_index(self, keys, *args, **kwargs) -> "SpectraFrame":
    """Return a new SpectraFrame with a new index.

    Note: ``inplace`` is ignored to match SpectraFrame copy semantics.
    """
    kwargs.pop("inplace", None)
    new_data = self.data.set_index(keys, *args, **kwargs)
    return SpectraFrame(spc=self.spc.copy(), wl=self.wl.copy(), data=new_data)

reset_index(*args, **kwargs)

Return a new SpectraFrame with a reset index.

Note: inplace is ignored to match SpectraFrame copy semantics.

Source code in pyspc/spectra.py
def reset_index(self, *args, **kwargs) -> "SpectraFrame":
    """Return a new SpectraFrame with a reset index.

    Note: ``inplace`` is ignored to match SpectraFrame copy semantics.
    """
    kwargs.pop("inplace", None)
    new_data = self.data.reset_index(*args, **kwargs)
    return SpectraFrame(spc=self.spc.copy(), wl=self.wl.copy(), data=new_data)

sort_index(*args, **kwargs)

Return a new SpectraFrame sorted by row index.

Mirrors pandas.DataFrame.sort_index for sorting rows, but always returns a new SpectraFrame and keeps spectra aligned with metadata.

Examples:

>>> sf = SpectraFrame(
...     [[1, 2], [3, 4]],
...     wl=[500, 600],
...     data={"group": ["B", "A"]},
... )
>>> sf.index = [2, 1]
>>> print(sf.sort_index())
   500  600 group
1    3    4     A
2    1    2     B
Source code in pyspc/spectra.py
def sort_index(self, *args, **kwargs) -> "SpectraFrame":
    """Return a new SpectraFrame sorted by row index.

    Mirrors pandas.DataFrame.sort_index for sorting rows, but always returns
    a new SpectraFrame and keeps spectra aligned with metadata.

    Examples
    --------
    >>> sf = SpectraFrame(
    ...     [[1, 2], [3, 4]],
    ...     wl=[500, 600],
    ...     data={"group": ["B", "A"]},
    ... )
    >>> sf.index = [2, 1]
    >>> print(sf.sort_index())
       500  600 group
    1    3    4     A
    2    1    2     B
    """
    kwargs.pop("inplace", None)
    axis = kwargs.pop("axis", 0)
    if axis not in [0, "index"]:
        raise ValueError("SpectraFrame.sort_index only supports axis=0 (rows).")
    ignore_index = kwargs.pop("ignore_index", False)

    # Use stable sorting to preserve order within duplicate indices.
    kwargs.setdefault("kind", "mergesort")

    # Track original row positions to keep alignment with duplicate indices.
    sorted_data = self.data.assign(_pos=np.arange(len(self.data))).sort_index(
        *args, **kwargs
    )
    row_indexer = sorted_data["_pos"].to_numpy()
    new_spc = self.spc[row_indexer, :]
    sorted_data = sorted_data.drop(columns="_pos")

    if ignore_index:
        sorted_data = sorted_data.reset_index(drop=True)

    return SpectraFrame(spc=new_spc, wl=self.wl.copy(), data=sorted_data)
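The `_pos` bookkeeping above is what keeps `spc` rows aligned with the sorted metadata, even when the index contains duplicates. A standalone sketch of the same trick:

```python
import numpy as np
import pandas as pd

spc = np.array([[1, 2], [3, 4], [5, 6]])
data = pd.DataFrame({"group": ["B", "A", "B"]}, index=[2, 0, 1])

# Record each row's original position, sort the metadata, then reorder
# the spectra matrix with the recorded positions.
sorted_data = data.assign(_pos=np.arange(len(data))).sort_index(kind="mergesort")
new_spc = spc[sorted_data["_pos"].to_numpy(), :]
sorted_data = sorted_data.drop(columns="_pos")
```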

sort_values(by, *args, **kwargs)

Return a new SpectraFrame sorted by row values.

Mirrors pandas.DataFrame.sort_values for sorting rows, but always returns a new SpectraFrame and keeps spectra aligned with metadata.

Examples:

>>> sf = SpectraFrame(
...     [[1, 2], [3, 4]],
...     wl=[500, 600],
...     data={"group": ["B", "A"]},
... )
>>> print(sf.sort_values("group"))
   500  600 group
1    3    4     A
0    1    2     B
>>> sf = SpectraFrame(
...     [[1, 2], [3, 4], [5, 6], [7, 8]],
...     wl=[500, 600],
...     data={
...         "group": ["B", "A", "B", "A"],
...         "score": [1, 2, 3, 4],
...     },
... )
>>> print(sf.sort_values(["group", "score"], ascending=[True, False]))
   500  600 group  score
3    7    8     A      4
1    3    4     A      2
2    5    6     B      3
0    1    2     B      1
Source code in pyspc/spectra.py
def sort_values(self, by, *args, **kwargs) -> "SpectraFrame":
    """Return a new SpectraFrame sorted by row values.

    Mirrors pandas.DataFrame.sort_values for sorting rows, but always returns
    a new SpectraFrame and keeps spectra aligned with metadata.

    Examples
    --------
    >>> sf = SpectraFrame(
    ...     [[1, 2], [3, 4]],
    ...     wl=[500, 600],
    ...     data={"group": ["B", "A"]},
    ... )
    >>> print(sf.sort_values("group"))
       500  600 group
    1    3    4     A
    0    1    2     B
    >>> sf = SpectraFrame(
    ...     [[1, 2], [3, 4], [5, 6], [7, 8]],
    ...     wl=[500, 600],
    ...     data={
    ...         "group": ["B", "A", "B", "A"],
    ...         "score": [1, 2, 3, 4],
    ...     },
    ... )
    >>> print(sf.sort_values(["group", "score"], ascending=[True, False]))
       500  600 group  score
    3    7    8     A      4
    1    3    4     A      2
    2    5    6     B      3
    0    1    2     B      1
    """
    kwargs.pop("inplace", None)
    axis = kwargs.pop("axis", 0)
    if axis not in [0, "index"]:
        raise ValueError("SpectraFrame.sort_values only supports axis=0 (rows).")
    ignore_index = kwargs.pop("ignore_index", False)

    # Track original row positions to keep alignment with duplicate indices.
    sorted_data = self.data.assign(_pos=np.arange(len(self.data))).sort_values(
        by=by, *args, **kwargs
    )
    row_indexer = sorted_data["_pos"].to_numpy()
    new_spc = self.spc[row_indexer, :]
    sorted_data = sorted_data.drop(columns="_pos")

    if ignore_index:
        sorted_data = sorted_data.reset_index(drop=True)

    return SpectraFrame(spc=new_spc, wl=self.wl.copy(), data=sorted_data)

wl_sort(ascending=True, kind='quicksort')

Return a new SpectraFrame with sorted wavelengths.

The wavelength array is sorted, and columns in spc are reordered to stay aligned with the updated wavelength order.

Examples:

>>> sf = SpectraFrame(
...     [[1, 2], [3, 4]],
...     wl=[600, 500],
...     data={"group": ["A", "B"]},
... )
>>> print(sf.wl_sort())
   500  600 group
0    2    1     A
1    4    3     B
Source code in pyspc/spectra.py
def wl_sort(
    self, ascending: bool = True, kind: str = "quicksort"
) -> "SpectraFrame":
    """Return a new SpectraFrame with sorted wavelengths.

    The wavelength array is sorted, and columns in `spc` are reordered to
    stay aligned with the updated wavelength order.

    Examples
    --------
    >>> sf = SpectraFrame(
    ...     [[1, 2], [3, 4]],
    ...     wl=[600, 500],
    ...     data={"group": ["A", "B"]},
    ... )
    >>> print(sf.wl_sort())
       500  600 group
    0    2    1     A
    1    4    3     B
    """
    wl_order = np.argsort(self.wl, kind=kind)
    if not ascending:
        wl_order = wl_order[::-1]
    new_wl = self.wl[wl_order]
    new_spc = self.spc[:, wl_order]
    return SpectraFrame(spc=new_spc, wl=new_wl, data=self.data.copy())
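The column reordering done by `wl_sort` boils down to a single `argsort` over the wavelength axis:

```python
import numpy as np

wl = np.array([600, 500, 700])
spc = np.array([[1, 2, 3],
                [4, 5, 6]])

# Sort wavelengths and reorder spectral columns identically.
order = np.argsort(wl, kind="quicksort")
new_wl = wl[order]
new_spc = spc[:, order]
```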

__getitem__(given)

Get a subset of the SpectraFrame

Provides a logic for the `[...]` operator. Two types of slicing are supported:

1. Single string - returns the corresponding column from the data
2. Tuple of three or four slicers - returns a subset of the SpectraFrame

The latter works similarly to the hyperSpec package in R. Basically, it allows slicing the data as sf[rows, cols, wls] or sf[rows, cols, wls, is_iloc], where rows, cols, and wls can each be a single value, a list of values, a slice, or a boolean vector; and is_iloc is a boolean flag indicating whether the slicing is done by iloc or by label (similar to wl_index in hyperSpec).

Warning

The slicing behaves like label-based slicing in a pandas DataFrame, so the last value of the slice is included in the output.

Parameters:

Name Type Description Default
given Union[str, tuple]

Single string or a tuple of three slicers and an optional flag

required

Returns:

Type Description
Union[Series, SpectraFrame]

Either a single column from the data or a subset of the SpectraFrame

Examples:

>>> # Generate a SpectraFrame
>>> spc = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
>>> wl = np.array([400, 500, 600])
>>> data = pd.DataFrame(
...     {"A": [10, 11, 12], "B": [13, 14, 15], "C": [16, 17, 18]},
...     index=[5, 6, 7],
... )
>>> sf = SpectraFrame(spc, wl, data)
>>> print(sf)
   400  500  600   A   B   C
5  1.0  2.0  3.0  10  13  16
6  4.0  5.0  6.0  11  14  17
7  7.0  8.0  9.0  12  15  18
>>> # Get a single column
>>> print(sf["A"])
5    10
6    11
7    12
Name: A, dtype: int64
>>> # Get a subset of the SpectraFrame
>>> print(sf[:5, :, :500])
   400  500   A   B   C
5  1.0  2.0  10  13  16
>>> # Access by iloc indexes
>>> print(sf[:1, :, :1, True])
   400   A   B   C
5  1.0  10  13  16
>>> print(sf[6:, 'B':'C', 500:])
   500  600   B   C
6  5.0  6.0  14  17
7  8.0  9.0  15  18
>>> print(sf[6:, 'B':'C', [400, 600]])
   400  600   B   C
6  4.0  6.0  14  17
7  7.0  9.0  15  18
>>> print(sf[:, :, 400])
   400   A   B   C
5  1.0  10  13  16
6  4.0  11  14  17
7  7.0  12  15  18
>>> print(sf[:, :, 550])
Traceback (most recent call last):
ValueError: Unexpected selector [550]
>>> print(sf[:, :, 510:550])
    A   B   C
5  10  13  16
6  11  14  17
7  12  15  18
>>> print(sf[:, :, 350:450])
   400   A   B   C
5  1.0  10  13  16
6  4.0  11  14  17
7  7.0  12  15  18
Source code in pyspc/spectra.py
def __getitem__(self, given: Union[str, tuple]) -> Union[pd.Series, "SpectraFrame"]:
    """Get a subset of the SpectraFrame

    Provides the logic for the `[...]` operator.
    Two types of slicing are supported:
    1. Single string - returns the corresponding column from the data
    2. Tuple of three or four slicers - returns a subset of the SpectraFrame
    The latter works similarly to the `hyperSpec` package in R. Basically,
    it allows slicing the data as
    `sf[rows, cols, wls]` or `sf[rows, cols, wls, is_iloc]`, where `rows`, `cols`,
    and `wls` can each be a single value, a list of values, a slice, or a boolean
    vector; and `is_iloc` is a boolean flag indicating whether slicing is
    done by iloc or by label (similar to `wl_index` in `hyperSpec`).

    Warning
    -------
    Slicing behaves like in a `pandas` DataFrame, so the last value
    in the slice is included in the output.

    Parameters
    ----------
    given : Union[str, tuple]
        Single string or a tuple of three slicers and an optional flag

    Returns
    -------
    Union[pd.Series, SpectraFrame]
        Either a single column from the data or a subset of the SpectraFrame

    Examples
    --------
    >>> # Generate a SpectraFrame
    >>> spc = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
    >>> wl = np.array([400, 500, 600])
    >>> data = pd.DataFrame(
    ...     {"A": [10, 11, 12], "B": [13, 14, 15], "C": [16, 17, 18]},
    ...     index=[5, 6, 7],
    ... )
    >>> sf = SpectraFrame(spc, wl, data)
    >>> print(sf)
       400  500  600   A   B   C
    5  1.0  2.0  3.0  10  13  16
    6  4.0  5.0  6.0  11  14  17
    7  7.0  8.0  9.0  12  15  18

    >>> # Get a single column
    >>> print(sf["A"])
    5    10
    6    11
    7    12
    Name: A, dtype: int64

    >>> # Get a subset of the SpectraFrame
    >>> print(sf[:5, :, :500])
       400  500   A   B   C
    5  1.0  2.0  10  13  16

    >>> # Access by iloc indexes
    >>> print(sf[:1, :, :1, True])
       400   A   B   C
    5  1.0  10  13  16

    >>> print(sf[6:, 'B':'C', 500:])
       500  600   B   C
    6  5.0  6.0  14  17
    7  8.0  9.0  15  18

    >>> print(sf[6:, 'B':'C', [400, 600]])
       400  600   B   C
    6  4.0  6.0  14  17
    7  7.0  9.0  15  18

    >>> print(sf[:, :, 400])
       400   A   B   C
    5  1.0  10  13  16
    6  4.0  11  14  17
    7  7.0  12  15  18

    >>> print(sf[:, :, 550])
    Traceback (most recent call last):
    ValueError: Unexpected selector [550]

    >>> print(sf[:, :, 510:550])
        A   B   C
    5  10  13  16
    6  11  14  17
    7  12  15  18

    >>> print(sf[:, :, 350:450])
       400   A   B   C
    5  1.0  10  13  16
    6  4.0  11  14  17
    7  7.0  12  15  18
    """
    if isinstance(given, str):
        return self.data[given]

    row_slice, col_slice, wl_slice = self._parse_getitem_tuple(given)
    return SpectraFrame(
        spc=self.spc[row_slice, wl_slice],
        wl=self.wl[wl_slice],
        data=self.data.iloc[row_slice, col_slice],
    )

__setitem__(given, value)

Set values in a subset of the SpectraFrame

Provides the logic for the frame[given] = value operator. given has the same format as in the __getitem__ method. The value can be a single value or an array-like structure with the same number of elements as the subset of the SpectraFrame.

Warning

Either the wavelength or the data-columns slicer (i.e. the second or third slicer) must be :. Otherwise, it is not clear where to put the value, and the method raises an error in such cases, e.g. sf[:, "a", 400:1000] = 10.

Parameters:

Name Type Description Default
given Union[str, tuple]

Single string or a tuple of three slicers

required
value Any

The value to be set in the subset

required

Examples:

>>> # Generate a SpectraFrame
>>> spc = np.arange(9).reshape(3, 3)
>>> sf = SpectraFrame(spc, [400, 500, 600], {"A": [10, 11, 12]})
>>> print(sf)
   400  500  600   A
0    0    1    2  10
1    3    4    5  11
2    6    7    8  12
>>> # Add a column
>>> sf["B"] = [1, 2, 3]
>>> print(sf)
   400  500  600   A  B
0    0    1    2  10  1
1    3    4    5  11  2
2    6    7    8  12  3
>>> # Edit a column
>>> sf["B"] = [20, 21, 22]
>>> print(sf)
   400  500  600   A   B
0    0    1    2  10  20
1    3    4    5  11  21
2    6    7    8  12  22
>>> # Set a single value
>>> sf[0, :, 500] = 100
>>> print(sf)
   400  500  600   A   B
0    0  100    2  10  20
1    3    4    5  11  21
2    6    7    8  12  22
>>> # Set a subset
>>> sf[1:, :, 500:] = [[200, 201], [300, 301]]
>>> print(sf)
   400  500  600   A   B
0    0  100    2  10  20
1    3  200  201  11  21
2    6  300  301  12  22
>>> # Set a subset with iloc
>>> sf[:2, :, :2, True] = 0
>>> print(sf)
   400  500  600   A   B
0    0    0    2  10  20
1    0    0  201  11  21
2    6  300  301  12  22
>>> # Invalid selector
>>> sf[:, ["A", "B"], :500] = 0
Traceback (most recent call last):
ValueError: Invalid slicing...
Source code in pyspc/spectra.py
def __setitem__(self, given: Union[str, tuple], value: Any) -> None:
    """Set values in a subset of the SpectraFrame

    Provides the logic for the `frame[<given>] = <value>` operator.
    `<given>` has the same format as in the `__getitem__` method. The `<value>`
    can be a single value or an array-like structure with the same
    number of elements as the subset of the SpectraFrame.

    Warning
    -------
    Either the wavelength or the data-columns slicer (i.e. the second or third
    slicer) must be `:`. Otherwise, it is not clear where to put the value,
    and the method raises an error in such cases,
    e.g. `sf[:, "a", 400:1000] = 10`.


    Parameters
    ----------
    given : Union[str, tuple]
        Single string or a tuple of three slicers
    value : Any
        The value to be set in the subset

    Examples
    --------
    >>> # Generate a SpectraFrame
    >>> spc = np.arange(9).reshape(3, 3)
    >>> sf = SpectraFrame(spc, [400, 500, 600], {"A": [10, 11, 12]})
    >>> print(sf)
       400  500  600   A
    0    0    1    2  10
    1    3    4    5  11
    2    6    7    8  12

    >>> # Add a column
    >>> sf["B"] = [1, 2, 3]
    >>> print(sf)
       400  500  600   A  B
    0    0    1    2  10  1
    1    3    4    5  11  2
    2    6    7    8  12  3

    >>> # Edit a column
    >>> sf["B"] = [20, 21, 22]
    >>> print(sf)
       400  500  600   A   B
    0    0    1    2  10  20
    1    3    4    5  11  21
    2    6    7    8  12  22

    >>> # Set a single value
    >>> sf[0, :, 500] = 100
    >>> print(sf)
       400  500  600   A   B
    0    0  100    2  10  20
    1    3    4    5  11  21
    2    6    7    8  12  22

    >>> # Set a subset
    >>> sf[1:, :, 500:] = [[200, 201], [300, 301]]
    >>> print(sf)
       400  500  600   A   B
    0    0  100    2  10  20
    1    3  200  201  11  21
    2    6  300  301  12  22

    >>> # Set a subset with iloc
    >>> sf[:2, :, :2, True] = 0
    >>> print(sf)
       400  500  600   A   B
    0    0    0    2  10  20
    1    0    0  201  11  21
    2    6  300  301  12  22

    >>> # Invalid selector
    >>> sf[:, ["A", "B"], :500] = 0
    Traceback (most recent call last):
    ValueError: Invalid slicing...
    """
    if isinstance(given, str):
        self.data.loc[:, given] = value
        return

    row_slice, col_slice, wl_slice = self._parse_getitem_tuple(given)
    if _is_empty_slice(col_slice) and not _is_empty_slice(wl_slice):
        self.spc[row_slice, wl_slice] = value
    elif not _is_empty_slice(col_slice) and _is_empty_slice(wl_slice):
        self.data.iloc[row_slice, col_slice] = value
    else:
        raise ValueError(
            "Invalid slicing. Either data columns or "
            "wavelengths indexes must be `:`"
        )

query(expr)

Filter spectra using pandas DataFrame.query

Parameters:

Name Type Description Default
expr str

Query expression

required

Returns:

Type Description
SpectraFrame

A new SpectraFrame with the filtered data

Examples:

>>> np.random.seed(42)
>>> sf = SpectraFrame(np.random.rand(4, 5), data={"group": list("AABB")})
>>> print(sf)
          0  ...         4 group
0  0.374540  ...  0.156019     A
1  0.155995  ...  0.708073     A
2  0.020584  ...  0.181825     B
3  0.183405  ...  0.291229     B
>>> sf.query("group == 'A'")
          0  ...         4 group
0  0.374540  ...  0.156019     A
1  0.155995  ...  0.708073     A
Source code in pyspc/spectra.py
def query(self, expr: str) -> "SpectraFrame":
    """Filter spectra using pandas DataFrame.query

    Parameters
    ----------
    expr : str
        Query expression

    Returns
    -------
    SpectraFrame
        A new SpectraFrame with the filtered data

    Examples
    --------
    >>> np.random.seed(42)
    >>> sf = SpectraFrame(np.random.rand(4, 5), data={"group": list("AABB")})
    >>> print(sf)
              0  ...         4 group
    0  0.374540  ...  0.156019     A
    1  0.155995  ...  0.708073     A
    2  0.020584  ...  0.181825     B
    3  0.183405  ...  0.291229     B
    >>> sf.query("group == 'A'")
              0  ...         4 group
    0  0.374540  ...  0.156019     A
    1  0.155995  ...  0.708073     A
    """
    indices = self.data.query(expr).index
    return self[indices, :, :]

assign(**kwargs)

Assign new columns to a SpectraFrame.

Returns a new SpectraFrame with the assigned columns.

Parameters:

Name Type Description Default
**kwargs

Column assignments, same as pandas DataFrame.assign()

{}

Returns:

Type Description
SpectraFrame

A new SpectraFrame with the assigned columns

Examples:

>>> np.random.seed(42)
>>> sf = SpectraFrame(np.random.rand(4, 5), data={"group": list("AABB")})
>>> print(sf)
          0  ...         4 group
0  0.374540  ...  0.156019     A
1  0.155995  ...  0.708073     A
2  0.020584  ...  0.181825     B
3  0.183405  ...  0.291229     B
>>> sf_new = sf.assign(new_col=lambda x: x.group == "A")
>>> print(sf_new)
          0  ...         4 group  new_col
0  0.374540  ...  0.156019     A     True
1  0.155995  ...  0.708073     A     True
2  0.020584  ...  0.181825     B    False
3  0.183405  ...  0.291229     B    False

Source code in pyspc/spectra.py
def assign(self, **kwargs) -> "SpectraFrame":
    """Assign new columns to a SpectraFrame.

    Returns a new SpectraFrame with the assigned columns.

    Parameters
    ----------
    **kwargs
        Column assignments, same as pandas DataFrame.assign()

    Returns
    -------
    SpectraFrame
        A new SpectraFrame with the assigned columns

    Examples
    --------
    >>> np.random.seed(42)
    >>> sf = SpectraFrame(np.random.rand(4, 5), data={"group": list("AABB")})
    >>> print(sf)
              0  ...         4 group
    0  0.374540  ...  0.156019     A
    1  0.155995  ...  0.708073     A
    2  0.020584  ...  0.181825     B
    3  0.183405  ...  0.291229     B
    >>> sf_new = sf.assign(new_col=lambda x: x.group == "A")
    >>> print(sf_new)
              0  ...         4 group  new_col
    0  0.374540  ...  0.156019     A     True
    1  0.155995  ...  0.708073     A     True
    2  0.020584  ...  0.181825     B    False
    3  0.183405  ...  0.291229     B    False
    """
    new_sf = self.copy()
    new_sf.data = new_sf.data.assign(**kwargs)
    return new_sf

drop(columns)

Drop specified columns from the SpectraFrame.

Returns a new SpectraFrame with the specified columns dropped.

Parameters:

Name Type Description Default
columns str or list of str

Column name(s) to drop from the data

required

Returns:

Type Description
SpectraFrame

A new SpectraFrame with specified columns dropped

Examples:

>>> np.random.seed(42)
>>> sf = SpectraFrame(
...     np.random.rand(4, 5),
...     data={"group": list("AABB"), "type": list("XYXY")}
... )
>>> print(sf)
          0  ...         4 group type
0  0.374540  ...  0.156019     A    X
1  0.155995  ...  0.708073     A    Y
2  0.020584  ...  0.181825     B    X
3  0.183405  ...  0.291229     B    Y
>>> sf_new = sf.drop("type")
>>> print(sf_new)
          0  ...         4 group
0  0.374540  ...  0.156019     A
1  0.155995  ...  0.708073     A
2  0.020584  ...  0.181825     B
3  0.183405  ...  0.291229     B
>>> sf_new2 = sf.drop(["group", "type"])
>>> print(sf_new2)
          0  ...         4
0  0.374540  ...  0.156019
1  0.155995  ...  0.708073
2  0.020584  ...  0.181825
3  0.183405  ...  0.291229
Source code in pyspc/spectra.py
def drop(self, columns) -> "SpectraFrame":
    """Drop specified columns from the SpectraFrame.

    Returns a new SpectraFrame with the specified columns dropped.

    Parameters
    ----------
    columns : str or list of str
        Column name(s) to drop from the data

    Returns
    -------
    SpectraFrame
        A new SpectraFrame with specified columns dropped

    Examples
    --------
    >>> np.random.seed(42)
    >>> sf = SpectraFrame(
    ...     np.random.rand(4, 5),
    ...     data={"group": list("AABB"), "type": list("XYXY")}
    ... )
    >>> print(sf)
              0  ...         4 group type
    0  0.374540  ...  0.156019     A    X
    1  0.155995  ...  0.708073     A    Y
    2  0.020584  ...  0.181825     B    X
    3  0.183405  ...  0.291229     B    Y
    >>> sf_new = sf.drop("type")
    >>> print(sf_new)
              0  ...         4 group
    0  0.374540  ...  0.156019     A
    1  0.155995  ...  0.708073     A
    2  0.020584  ...  0.181825     B
    3  0.183405  ...  0.291229     B
    >>> sf_new2 = sf.drop(["group", "type"])
    >>> print(sf_new2)
              0  ...         4
    0  0.374540  ...  0.156019
    1  0.155995  ...  0.708073
    2  0.020584  ...  0.181825
    3  0.183405  ...  0.291229
    """
    new_sf = self.copy()
    new_sf.data = new_sf.data.drop(columns=columns)
    return new_sf

__array__()

Return spectral data when converted to numpy array

This method is called when np.array(sf) is used on a SpectraFrame object.

Returns:

Type Description
ndarray

The spectral data array (self.spc)

Source code in pyspc/spectra.py
def __array__(self) -> np.ndarray:
    """Return spectral data when converted to numpy array

    This method is called when np.array(sf) is used on a SpectraFrame object.

    Returns
    -------
    np.ndarray
        The spectral data array (self.spc)
    """
    return self.spc
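The mechanism behind this method is Python's `__array__` protocol: `np.array(obj)` asks the object for an ndarray representation. A minimal stand-in class (hypothetical, not the pyspc implementation) illustrates the dispatch:

```python
import numpy as np

class MiniFrame:
    """Minimal stand-in illustrating the __array__ protocol."""

    def __init__(self, spc):
        self.spc = np.asarray(spc)

    def __array__(self, dtype=None, copy=None):
        # np.array(obj) calls this and uses the returned ndarray
        return np.asarray(self.spc, dtype=dtype)

mf = MiniFrame([[1.0, 2.0], [3.0, 4.0]])
arr = np.array(mf)  # dispatches to MiniFrame.__array__
```

This is why a SpectraFrame can be passed directly to numpy functions that call `np.asarray` internally: only the spectral matrix `spc` is seen, not the metadata.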

wl_resample(new_wl, method='interp1d', **kwargs)

Resample wavelengths, i.e. shift wavelengths with interpolation

Parameters:

Name Type Description Default
new_wl ndarray

New wavenumbers

required
method str

Method for interpolation. Currently only "interp1d" is supported, which uses the scipy.interpolate.interp1d function.

'interp1d'
kwargs dict

Additional parameters to be passed to the interpolator function. See scipy.interpolate.interp1d docs for more details.

{}

Returns:

Type Description
SpectraFrame

A new SpectraFrame object with new_wl as wavenumbers, and interpolated signal values as spectral data. *.data part remains the same.

Raises:

Type Description
NotImplementedError

Unimplemented method of interpolation.

Source code in pyspc/spectra.py
def wl_resample(
    self, new_wl: np.ndarray, method="interp1d", **kwargs
) -> "SpectraFrame":
    """Resample wavelengths, i.e. shift wavelenghts with interpolation

    Parameters
    ----------
    new_wl : np.ndarray
        New wavenumbers
    method : str, optional
        Method for interpolation. Currently only "interp1d" is supported,
        which uses the `scipy.interpolate.interp1d` function.
    kwargs : dict, optional
        Additional parameters to be passed to the interpolator function.
        See `scipy.interpolate.interp1d` docs for more details.

    Returns
    -------
    SpectraFrame
        A new SpectraFrame object with `new_wl` as wavenumbers, and
        interpolated signal values as spectral data. `*.data` part
        remains the same.

    Raises
    ------
    NotImplementedError
        Unimplemented method of interpolation.
    """
    if method == "interp1d":
        interpolator = scipy.interpolate.interp1d(x=self.wl, y=self.spc, **kwargs)
        new_spc = interpolator(new_wl)
    else:
        raise NotImplementedError("Other methods not available yet")

    return SpectraFrame(new_spc, wl=new_wl, data=self.data)
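The interpolation step this method performs can be sketched directly with scipy, outside of SpectraFrame (a standalone illustration of the same `interp1d` call, with made-up data):

```python
import numpy as np
import scipy.interpolate

# Two spectra measured at three wavelengths
wl = np.array([400.0, 500.0, 600.0])
spc = np.array([[1.0, 2.0, 3.0],
                [4.0, 5.0, 6.0]])

# Resample onto a new wavelength grid; interp1d interpolates
# each row along the last (wavelength) axis by default
new_wl = np.array([450.0, 550.0])
interpolator = scipy.interpolate.interp1d(x=wl, y=spc)
new_spc = interpolator(new_wl)  # shape (2, 2)
```

With the default linear interpolation, values at 450 and 550 fall halfway between the neighbouring grid points, e.g. 1.5 and 2.5 for the first spectrum.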

resample_wl(new_wl, method='interp1d', **kwargs)

Resample wavelengths (deprecated name for wl_resample).

This method is kept for backward compatibility. Use wl_resample.

Source code in pyspc/spectra.py
def resample_wl(
    self, new_wl: np.ndarray, method="interp1d", **kwargs
) -> "SpectraFrame":
    """Resample wavelengths (deprecated name for ``wl_resample``).

    This method is kept for backward compatibility. Use ``wl_resample``.
    """
    warnings.warn(
        "resample_wl is deprecated; use wl_resample instead.",
        DeprecationWarning,
        stacklevel=2,
    )
    return self.wl_resample(new_wl, method=method, **kwargs)

apply(func, *args, groupby=None, axis=0, **kwargs)

Apply function to the spectral data

Parameters:

Name Type Description Default
func Union[str, callable]

Either a string with the name of a numpy function, e.g. "max", "mean", etc., or a callable that can be passed to numpy.apply_along_axis

required
groupby Union[str, list[str]]

Single column name or list of data column names to use for grouping the data. By default None, so the function is applied to all spectral data.

None
axis int

Standard axis. Same as in numpy or pandas; by default 1 when groupby is not provided, and 0 when it is.

0

Returns:

Type Description
SpectraFrame

Output spectral frame where
* out.spc is the result of func
* out.wl is either the same (axis=0, or axis=1 and nwl matches) or the range 0..N (axis=1 and nwl does not match)
* out.data is the same if axis=1. If axis=0, it is either empty (no grouping) or represents the grouping.

Source code in pyspc/spectra.py
def apply(
    self,
    func: Union[str, Callable],
    *args,
    groupby: Union[str, list[str], None] = None,
    axis: int = 0,
    **kwargs,
) -> "SpectraFrame":
    """Apply function to the spectral data

    Parameters
    ----------
    func : Union[str, Callable]
        Either a string with the name of a numpy function, e.g. "max", "mean",
        etc., or a callable that can be passed to `numpy.apply_along_axis`
    groupby : Union[str, list[str]], optional
        Single column name or list of `data` column names to use for grouping
        the data. By default None, so the function is applied to all spectral
        data.
    axis : int, optional
        Standard axis. Same as in `numpy` or `pandas`; by default 1 when groupby
        is not provided, and 0 when it is.

    Returns
    -------
    SpectraFrame
        Output spectral frame where
        * `out.spc` is the result of `func`
        * `out.wl` either the same (axis=0 OR axis=1 and `nwl` matches)
          or range 0..N (axis=1 and `nwl` does not match)
        * `out.data` The same if axis=1. If axis=0, either empty (no grouping)
            or represents the grouping.
    """

    # Prepare arguments
    axis = self._get_axis(axis, groupby)
    groupby = self._get_groupby(groupby)

    # Prepare default values
    new_wl = self.wl if axis == 0 else None
    new_data = self.data if axis == 1 else None

    if groupby is None:
        new_spc = self._apply_func(func, *args, axis=axis, **kwargs)
    else:
        # Prepare a dataframe for groupby aggregation
        grouped = self.to_pandas().groupby(groupby, observed=True)[self.wl]

        # Prepare list of group names as dicts {'column name': 'column value', ...}
        keys = [i for i, _ in grouped]
        groups = [dict(zip(groupby, gr)) for gr in keys]

        # Apply to each group
        spc_list = [
            self._apply_func(func, *args, data=group.values, axis=0, **kwargs)
            for _, group in grouped
        ]
        data_list = [
            pd.DataFrame({**gr, "group_index": range(spc_list[i].shape[0])})
            for i, gr in enumerate(groups)
        ]

        # Combine
        new_spc = np.concatenate(spc_list, axis=0)
        new_data = pd.concat(data_list, axis=0, ignore_index=True)

    # If the applied function returns the same number of wavelengths,
    # we assume that the wavelengths are unchanged, e.g. baseline,
    # smoothing, etc.
    if (new_wl is None) and (new_spc.shape[1] == self.nwl):
        new_wl = self.wl

    return SpectraFrame(new_spc, wl=new_wl, data=new_data)
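The grouped branch above boils down to a pandas groupby over the wavelength columns. A standalone sketch of the equivalent computation (illustrative data, not the pyspc internals):

```python
import numpy as np
import pandas as pd

# Unfolded spectra: rows are samples, columns are wavelengths
spc = pd.DataFrame(
    np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]),
    columns=[400, 500],
)
group = pd.Series(["A", "A", "B", "B"], name="group")

# Roughly what sf.apply("mean", groupby="group") computes:
# one mean spectrum per group, wavelengths preserved
mean_spc = spc.groupby(group, observed=True).mean()
```

Each group of rows collapses to a single row, so the result has one spectrum per unique group value, which is why `out.data` then represents the grouping rather than the original metadata.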

area()

Calculate area under the spectra

Source code in pyspc/spectra.py
def area(self) -> "SpectraFrame":
    """Calculate area under the spectra"""
    return SpectraFrame(
        scipy.integrate.trapezoid(self.spc, x=self.wl, axis=1).reshape((-1, 1)),
        wl=None,
        data=self.data,
    )
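The integration step can be reproduced directly with scipy's trapezoidal rule (a standalone sketch with made-up spectra):

```python
import numpy as np
import scipy.integrate

wl = np.array([0.0, 1.0, 2.0])
spc = np.array([[1.0, 1.0, 1.0],   # constant spectrum
                [0.0, 1.0, 0.0]])  # triangular peak

# Trapezoidal area under each spectrum: one value per row,
# reshaped to a single-column matrix as in SpectraFrame.area
areas = scipy.integrate.trapezoid(spc, x=wl, axis=1).reshape((-1, 1))
```

The constant spectrum gives an area of 2.0 (height 1 over a width of 2), and the triangular one gives 1.0, matching the usual trapezoid formula.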

rearrange(pattern, fill_value=None, **grid_values)

Rearrange spectra into a dense multidimensional tensor via einops patterns.

This is intended for hyperspectral images and other gridded measurements where sample coordinates are stored as columns in sf.data (e.g. y, x, z, time, batch) and wavelengths are stored in sf.wl. One common use case is reshaping an unfolded 2D spectral data matrix into a hyperspectral cube with shape (y, x, wl) or (batch, y, x, wl).

Parameters:

Name Type Description Default
pattern str

Einops-style output pattern. Must include wl, e.g. "batch y x wl" or "(batch y) x wl".

required
fill_value Optional[float]

Fill missing coordinate combinations (ragged grids) with this value. If None (default), missing entries are filled with NaNs (np.nan).

None
**grid_values dict[str, list[Any]]

Optional grid specifications. For each axis an explicit ordered list of axis values (e.g. x=[0, 1, 2, 3]).

{}

Returns:

Type Description
ndarray

A dense tensor matching the requested pattern.

Raises:

Type Description
ValueError

If the pattern is invalid or incompatible with the data.

NotImplementedError

If the pattern includes unsupported features such as ellipsis (...).

Notes

If padding is applied, the output dtype may be promoted to accommodate fill_value (e.g. integer spectra padded with np.nan become floats).

Examples:

>>> np.random.seed(42)
>>> sf = SpectraFrame(
...     spc=np.arange(3*5).reshape((3, 5)),
...     wl=np.array([400, 500, 600, 700, 800]),
...     data=pd.DataFrame({
...         "y": [1, 0, 0],
...         "x": [0, 0, 1]
...     })
... )
>>> print(sf)
   400  500  600  700  800  y  x
0    0    1    2    3    4  1  0
1    5    6    7    8    9  0  0
2   10   11   12   13   14  0  1
>>> cube = sf.rearrange(pattern="y x wl", fill_value=np.nan)
>>> print(cube.shape)
(2, 2, 5)
>>> print(cube[:,:,0]) # wl=400 slice
[[ 5. 10.]
 [ 0. nan]]
Source code in pyspc/spectra.py
def rearrange(
    self,
    pattern: str,
    fill_value: Optional[float] = None,
    **grid_values: dict[str, list[Any]],
) -> np.ndarray:
    """Rearrange spectra into a dense multidimensional tensor via einops patterns.

    This is intended for hyperspectral images and other gridded measurements where
    sample coordinates are stored as columns in ``sf.data`` (e.g. ``y``, ``x``,
    ``z``, ``time``, ``batch``) and wavelengths are stored in ``sf.wl``. One
    common use case is reshaping an unfolded 2D spectral data matrix into a
    hyperspectral cube with shape ``(y, x, wl)`` or ``(batch, y, x, wl)``.

    Parameters
    ----------
    pattern : str
        Einops-style *output* pattern. Must include ``wl``, e.g.
        ``"batch y x wl"`` or ``"(batch y) x wl"``.
    fill_value : Optional[float], optional
        Fill missing coordinate combinations (ragged grids) with this value. If
        None (default), missing entries are filled with NaNs (``np.nan``).
    **grid_values: dict[str, list[Any]]
        Optional grid specifications. For each axis an explicit ordered
        list of axis values (e.g. ``x=[0, 1, 2, 3]``).

    Returns
    -------
    np.ndarray
        A dense tensor matching the requested pattern.

    Raises
    ------
    ValueError
        If the pattern is invalid or incompatible with the data.
    NotImplementedError
        If the pattern includes unsupported features such as ellipsis (...).

    Notes
    -----
    If padding is applied, the output dtype may be promoted to accommodate
    ``fill_value`` (e.g. integer spectra padded with ``np.nan`` become floats).

    Examples
    --------
    >>> np.random.seed(42)
    >>> sf = SpectraFrame(
    ...     spc=np.arange(3*5).reshape((3, 5)),
    ...     wl=np.array([400, 500, 600, 700, 800]),
    ...     data=pd.DataFrame({
    ...         "y": [1, 0, 0],
    ...         "x": [0, 0, 1]
    ...     })
    ... )
    >>> print(sf)
       400  500  600  700  800  y  x
    0    0    1    2    3    4  1  0
    1    5    6    7    8    9  0  0
    2   10   11   12   13   14  0  1

    >>> cube = sf.rearrange(pattern="y x wl", fill_value=np.nan)
    >>> print(cube.shape)
    (2, 2, 5)
    >>> print(cube[:,:,0]) # wl=400 slice
    [[ 5. 10.]
     [ 0. nan]]
    """
    einops = _require_einops()
    sorted_spc, einops_pattern, sizes = self._prepare_for_einops(
        "rearrange", pattern, fill_value=fill_value, **grid_values
    )

    # Validate: total size matches number of spectra
    sizes.pop("wl", None)  # wl is not counted in total size
    if np.prod(list(sizes.values())) != len(sorted_spc):
        raise ValueError(
            "Cannot reshape: number of spectra does not match the implied grid "
            "size. Ensure coordinate tuples are unique and that the requested "
            "grid axes match the available metadata."
        )

    # Rest of validation and rearrangement is done on the einops side
    return einops.rearrange(sorted_spc, einops_pattern, **sizes)

reduce(reducer, pattern, *, ignore_na=False, fill_value=None, **grid_values)

Reduce spectra along axes implied by an einops-style output pattern.

The pattern uses metadata axes from sf.data and may include wl (to keep spectra) or omit it (to reduce over wavelengths). When the output is 2D with wl as the last axis the result is equivalent to SpectraFrame.apply(reducer, groupby=...).spc.

Parameters:

Name Type Description Default
reducer Union[str, Callable]

Reduction to apply. Supported strings: "mean", "sum", "min", "max", "std", "median". Callables are also supported.

required
pattern str

Einops-style output pattern.

required
ignore_na bool

Use NaN-aware reductions for supported string reducers. Defaults to False.

False
fill_value Optional[float]

When returning an array, fill missing coordinate combinations in reshaping. If None (default), missing entries are filled with NaNs (np.nan).

None
**grid_values Any

Optional grid specifications. For each axis an explicit ordered list of axis values (e.g. x=[0, 1, 2, 3]). NOTE: At the moment, the order of values is not preserved in the output tensor, the values are always sorted. This may change in future releases.

{}

Returns:

Type Description
ndarray

Reduced array matching the requested pattern.

Notes

If padding is applied, the output dtype may be promoted to accommodate fill_value (e.g. integer spectra padded with np.nan become floats).

Examples:

>>> np.random.seed(42)
>>> sf = SpectraFrame(
...     spc=np.arange(6*5).reshape((6, 5)),
...     wl=np.array([400, 500, 600, 700, 800]),
...     data=pd.DataFrame({
...         "y": [1, 0, 1, 0, 1, 1],
...         "x": [0, 0, 1, 1, 0, 1],
...         "batch": [0, 0, 0, 1, 1, 1]
...     })
... )
>>> print(sf)
   400  500  600  700  800  y  x  batch
0    0    1    2    3    4  1  0      0
1    5    6    7    8    9  0  0      0
2   10   11   12   13   14  1  1      0
3   15   16   17   18   19  0  1      1
4   20   21   22   23   24  1  0      1
5   25   26   27   28   29  1  1      1
>>> # Reduce to mean spectra per pixel (y, x)
>>> reduced = sf.reduce(reducer="mean", pattern="y x wl", fill_value=np.nan)
>>> print(reduced.shape)
(2, 2, 5)
>>> print(reduced[:,:,0]) # wl=400 slice
[[ nan  nan]
 [10.  17.5]]
>>> # Ignore NaNs and reduce to sum spectra per batch
>>> reduced = sf.reduce(
...     reducer="mean",
...     pattern="y x wl",
...     fill_value=np.nan,
...     ignore_na=True
... )
>>> print(reduced[:,:,0]) # wl=400 slice
[[ 5.  15. ]
 [10.  17.5]]
Source code in pyspc/spectra.py
def reduce(
    self,
    reducer: Union[str, Callable],
    pattern: str,
    *,
    ignore_na: bool = False,
    fill_value: Optional[float] = None,
    **grid_values: Any,
) -> Union[np.ndarray, "SpectraFrame"]:
    """Reduce spectra along axes implied by an einops-style output pattern.

    The pattern uses metadata axes from ``sf.data`` and may include ``wl`` (to keep
    spectra) or omit it (to reduce over wavelengths). When the output is 2D with
    ``wl`` as the last axis the result is equivalent to
    ``SpectraFrame.apply(reducer, groupby=...).spc``.

    Parameters
    ----------
    reducer : Union[str, Callable]
        Reduction to apply. Supported strings: ``"mean"``, ``"sum"``, ``"min"``,
        ``"max"``, ``"std"``, ``"median"``. Callables are also supported.
    pattern : str
        Einops-style *output* pattern.
    ignore_na : bool, optional
        Use NaN-aware reductions for supported string reducers. Defaults to False.
    fill_value : Optional[float], optional
        When returning an array, fill missing coordinate combinations in reshaping.
        If None (default), missing entries are filled with NaNs (``np.nan``).
    **grid_values : Any
        Optional grid specifications. For each axis an explicit ordered
        list of axis values (e.g. ``x=[0, 1, 2, 3]``).
        NOTE: At the moment, the order of values is not preserved in the output
        tensor, the values are always sorted. This may change in future releases.

    Returns
    -------
    np.ndarray
        Reduced array matching the requested pattern.

    Notes
    -----
    If padding is applied, the output dtype may be promoted to accommodate
    ``fill_value`` (e.g. integer spectra padded with ``np.nan`` become floats).

    Examples
    --------
    >>> np.random.seed(42)
    >>> sf = SpectraFrame(
    ...     spc=np.arange(6*5).reshape((6, 5)),
    ...     wl=np.array([400, 500, 600, 700, 800]),
    ...     data=pd.DataFrame({
    ...         "y": [1, 0, 1, 0, 1, 1],
    ...         "x": [0, 0, 1, 1, 0, 1],
    ...         "batch": [0, 0, 0, 1, 1, 1]
    ...     })
    ... )
    >>> print(sf)
       400  500  600  700  800  y  x  batch
    0    0    1    2    3    4  1  0      0
    1    5    6    7    8    9  0  0      0
    2   10   11   12   13   14  1  1      0
    3   15   16   17   18   19  0  1      1
    4   20   21   22   23   24  1  0      1
    5   25   26   27   28   29  1  1      1

    >>> # Reduce to mean spectra per pixel (y, x)
    >>> reduced = sf.reduce(reducer="mean", pattern="y x wl", fill_value=np.nan)
    >>> print(reduced.shape)
    (2, 2, 5)
    >>> print(reduced[:,:,0]) # wl=400 slice
    [[ nan  nan]
     [10.  17.5]]

    >>> # Ignore NaNs and reduce to sum spectra per batch
    >>> reduced = sf.reduce(
    ...     reducer="mean",
    ...     pattern="y x wl",
    ...     fill_value=np.nan,
    ...     ignore_na=True
    ... )
    >>> print(reduced[:,:,0]) # wl=400 slice
    [[ 5.  15. ]
     [10.  17.5]]
    """

    einops = _require_einops()
    sorted_spc, einops_pattern, sizes = self._prepare_for_einops(
        "reduce", pattern, fill_value=fill_value, **grid_values
    )

    # Parse reducer
    if isinstance(reducer, str):
        reducer_key = reducer.lower()
        reducer_prefix = "nan" if ignore_na else ""
        numpy_func_name = f"{reducer_prefix}{reducer_key}"
        reducer_names = ["mean", "sum", "min", "max", "std", "median"]
        if not hasattr(np, numpy_func_name) or reducer_key not in reducer_names:
            raise ValueError(
                "Unsupported reducer. Expected one of "
                "['mean', 'sum', 'min', 'max', 'std', 'median'], "
                f"got {reducer!r}."
            )
        func: Callable = getattr(np, numpy_func_name)
    elif callable(reducer):
        func: Callable = reducer
    else:
        raise ValueError("Reducer must be either a string or a callable.")

    return einops.reduce(
        sorted_spc,
        einops_pattern,
        func,
        **sizes,
    )

normalize(method, ignore_na=True, peak_range=None, **kwargs)

Dispatcher for spectra normalization

Parameters:

Name Type Description Default
method str

Method of normalization. Available options: '01', 'area', 'vector', 'mean', 'peak' (normalize by the peak value in the given range). By default, the peak value is approximated by the maximum value in the given range. To use a different fitting method, pass additional keyword arguments through **kwargs to the around_max_peak_fit function.

required
ignore_na bool

Ignore NaN values in the data, by default True

True
peak_range tuple[float, float]

Range of wavelength/wavenumber to use for peak normalization. If None (default), the whole range is used.

None

Returns:

Type Description
SpectraFrame

A new SpectraFrame with normalized values

Raises:

Type Description
ValueError

An unknown normalization method was provided

Source code in pyspc/spectra.py
def normalize(
    self,
    method: str,
    ignore_na: bool = True,
    peak_range: Optional[tuple[float, float]] = None,
    **kwargs,
) -> "SpectraFrame":
    """Dispatcher for spectra normalization

    Parameters
    ----------
    method : str
        Method of normalization. Available options: '01', 'area', 'vector', 'mean',
        'peak' (normalize by the peak value in the given range). By default, the
        peak value is approximated by the maximum value in the given range. To use
        a different fitting method, pass additional `**kwargs` to the
        `around_max_peak_fit` function.
    ignore_na : bool, optional
        Ignore NaN values in the data, by default True
    peak_range : tuple[float, float], optional
        Range of wavelength/wavenumber to use for peak normalization.
        If None (default), the whole range is used.

    Returns
    -------
    SpectraFrame
        A new SpectraFrame with normalized values

    Raises
    ------
    ValueError
        An unknown normalization method was provided
    """
    spc = self.copy()
    if method == "01":
        spc = spc - spc.min(axis=1, ignore_na=ignore_na)
        spc = spc / spc.max(axis=1, ignore_na=ignore_na)
    elif method == "area":
        spc = spc / spc.area()
    elif method == "peak":
        if peak_range is None:
            peak_range: tuple[float, float] = (self.wl[0], self.wl[-1])

        peak_intensities = around_max_peak_fit(
            x=self[:, :, peak_range[0] : peak_range[1]].wl,
            y=self[:, :, peak_range[0] : peak_range[1]].spc,
            **kwargs,
        )
        spc = spc / peak_intensities.y_max.values.reshape((spc.nspc, -1))
    elif method == "vector":
        if ignore_na:
            spc = spc / np.sqrt(
                np.nansum(np.power(spc.spc, 2), axis=1, keepdims=True)
            )
        else:
            spc = spc / np.sqrt(np.sum(np.power(spc.spc, 2), axis=1, keepdims=True))
    elif method == "mean":
        spc = spc / spc.mean(axis=1, ignore_na=ignore_na)
    else:
        raise ValueError("Unknown normalization method")

    return spc
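The `vector` branch above is plain NumPy arithmetic. A minimal standalone sketch of the `ignore_na=True` case on a toy spectra matrix (not using `SpectraFrame` itself):

```python
import numpy as np

# Toy "unfolded" spectra: each row is one spectrum
spc = np.array([[3.0, 4.0, np.nan],
                [6.0, 8.0, 0.0]])

# Vector (L2) normalization, ignoring NaNs, mirroring the ignore_na=True branch
norms = np.sqrt(np.nansum(np.power(spc, 2), axis=1, keepdims=True))
normalized = spc / norms

# Each row now has unit L2 norm over its non-NaN values
print(np.nansum(normalized**2, axis=1))  # -> [1. 1.]
```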

smooth(method='savgol', **kwargs)

Dispatcher for spectra smoothing

Parameters:

Name Type Description Default
method str

Method of smoothing. Currently, only "savgol" is available

'savgol'
kwargs dict

Additional parameters to pass to the smoothing method

{}

Returns:

Type Description
SpectraFrame

A new frame with smoothed values

Raises:

Type Description
NotImplementedError

Unknown or unimplemented smoothing method

Source code in pyspc/spectra.py
def smooth(self, method: str = "savgol", **kwargs) -> "SpectraFrame":
    """Dispatcher for spectra smoothing

    Parameters
    ----------
    method : str, optional
        Method of smoothing. Currently, only "savgol" is available
    kwargs : dict
        Additional parameters to pass to the smoothing method

    Returns
    -------
    SpectraFrame
        A new frame with smoothed values

    Raises
    ------
    NotImplementedError
        Unknown or unimplemented smoothing method
    """
    spc = self.copy()
    if method == "savgol":
        spc.spc = scipy.signal.savgol_filter(spc.spc, **kwargs)
    else:
        raise NotImplementedError("Method is not implemented yet")

    return spc
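With `method="savgol"`, the kwargs are forwarded directly to `scipy.signal.savgol_filter`, which requires at least `window_length` and `polyorder`. A standalone sketch on a toy signal (the 2-D layout mimics the `spc` matrix; the filter operates along the last axis by default):

```python
import numpy as np
import scipy.signal

# One noisy toy spectrum, shaped (n_spectra, n_points) like the spc matrix
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 2 * np.pi, 101)) + rng.normal(0, 0.1, 101)
spc = y[np.newaxis, :]

# These kwargs are what smooth(method="savgol", ...) would pass through
smoothed = scipy.signal.savgol_filter(spc, window_length=11, polyorder=3)
```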

baseline(method, **kwargs)

Dispatcher for spectra baseline estimation

Dispatches baseline correction to the corresponding method in the pybaselines package. In addition, a "rubberband" method is available.

Parameters:

Name Type Description Default
method str

A name of the method in pybaselines package (e.g. "airpls", "snip"), or "rubberband"

required
kwargs

Additional parameters to pass to the baseline correction method

{}

Returns:

Type Description
SpectraFrame

A frame of estimated baselines

Raises:

Type Description
ValueError

Unknown baseline method provided

Source code in pyspc/spectra.py
def baseline(self, method: str, **kwargs) -> "SpectraFrame":
    """Dispatcher for spectra baseline estimation

    Dispatches baseline correction to the corresponding method
    in the `pybaselines` package.
    In addition, a "rubberband" method is available.

    Parameters
    ----------
    method : str
        A name of the method in `pybaselines` package (e.g. "airpls", "snip"),
        or "rubberband"
    kwargs: dict
        Additional parameters to pass to the baseline correction method

    Returns
    -------
    SpectraFrame
        A frame of estimated baselines

    Raises
    ------
    ValueError
        Unknown baseline method provided
    """
    baseline_fitter = pybaselines.Baseline(x_data=self.wl)
    if hasattr(baseline_fitter, method):
        baseline_method = getattr(baseline_fitter, method)
        baseline_func = lambda y: baseline_method(y, **kwargs)[0]
    elif method == "rubberband":
        baseline_func = lambda y: rubberband(self.wl, y, **kwargs)
    else:
        raise ValueError(
            "Unknown method. Method must be either "
            "from `pybaselines` or 'rubberband'"
        )
    return self.apply(baseline_func, axis=1)
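The dispatch above resolves the method name via `getattr` on a `pybaselines.Baseline` instance and keeps only the first element of the `(baseline, params)` tuple that `pybaselines` methods return. A minimal sketch of that pattern using a stand-in fitter class (hypothetical, for illustration only, so it runs without `pybaselines`):

```python
import numpy as np

class _FakeFitter:
    """Stand-in for pybaselines.Baseline (hypothetical, for illustration)."""

    def snip(self, y, max_half_window=4):
        # Real pybaselines methods return (baseline, params); here we
        # fake a flat baseline at the minimum intensity
        return np.full_like(y, np.min(y)), {}

fitter = _FakeFitter()
method = "snip"

if hasattr(fitter, method):
    # Keep only the baseline array, dropping the params dict
    baseline_func = lambda y: getattr(fitter, method)(y)[0]
else:
    raise ValueError("Unknown method")

y = np.array([1.0, 3.0, 2.0])
baseline = baseline_func(y)  # -> array([1., 1., 1.])
```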

sbaseline(method, **kwargs)

Subtract baseline from the spectra

Same as .baseline(), but returns a new frame with the baseline subtracted. A shortcut for SpectraFrame - SpectraFrame.baseline(...) that allows chaining methods, e.g. sf.smooth().sbaseline("snip").normalize().

Source code in pyspc/spectra.py
def sbaseline(self, method: str, **kwargs) -> "SpectraFrame":
    """Subtract baseline from the spectra

    Same as `.baseline()`, but returns a new frame with the baseline subtracted.
    A shortcut for `SpectraFrame - SpectraFrame.baseline(...)` that allows
    chaining methods, e.g. `sf.smooth().sbaseline("snip").normalize()`.
    """
    return self - self.baseline(method, **kwargs).spc

to_pandas(multiindex=False, string_names=False)

Convert to a pandas DataFrame

Parameters:

Name Type Description Default
multiindex bool

Adds an index level to columns separating spectral data (spc) from meta data (data), by default False

False
string_names bool

Converts all column names to strings, by default False

False

Returns:

Type Description
DataFrame

Dataframe where spectral data is combined with meta data. Wavelengths are used as column names for spectral data part.

Source code in pyspc/spectra.py
def to_pandas(self, multiindex=False, string_names=False) -> pd.DataFrame:
    """Convert to a pandas DataFrame

    Parameters
    ----------
    multiindex : bool, optional
        Adds an index level to columns separating spectral data (`spc`) from
        meta data (`data`), by default False
    string_names : bool, optional
        Converts all column names to strings, by default False

    Returns
    -------
    pd.DataFrame
        Dataframe where spectral data is combined with meta data.
        Wavelengths are used as column names for spectral data part.
    """
    df = pd.DataFrame(self.spc, columns=self.wl, index=self.data.index)
    if not self.data.empty:
        df = pd.concat([df, self.data], axis=1)

    if string_names:
        df.columns = df.columns.map(str)

    if multiindex:
        df.columns = pd.MultiIndex.from_tuples(
            [("spc", wl) for wl in df.columns[: self.nwl]]
            + [("data", col) for col in df.columns[self.nwl :]]
        )

    return df
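The `multiindex=True` layout can be reproduced with plain pandas. A standalone sketch on toy data (not using `SpectraFrame` itself), showing how the two-level columns separate spectra from metadata:

```python
import numpy as np
import pandas as pd

# Toy data mirroring the to_pandas(multiindex=True) output
spc = np.arange(6).reshape(2, 3)
wl = [400, 500, 600]
meta = pd.DataFrame({"y": [1, 0]})

df = pd.concat([pd.DataFrame(spc, columns=wl), meta], axis=1)
df.columns = pd.MultiIndex.from_tuples(
    [("spc", w) for w in wl] + [("data", c) for c in meta.columns]
)

# The top level cleanly splits spectral columns from metadata columns
print(df[("spc", 400)].tolist())  # -> [0, 3]
```

With this layout, `df["spc"]` recovers the spectral block and `df["data"]` the metadata block.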

__sizeof__()

Estimate the total memory usage

Source code in pyspc/spectra.py
def __sizeof__(self):
    """Estimate the total memory usage"""
    return self.spc.__sizeof__() + self.data.__sizeof__() + self.wl.__sizeof__()
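Because `__sizeof__` sums the component sizes, `sys.getsizeof` on a frame approximates the combined footprint of the spectra matrix, metadata, and wavelength axis. A rough illustration with a plain NumPy array, assuming the array owns its data buffer:

```python
import sys
import numpy as np

arr = np.zeros((100, 50))

# sys.getsizeof invokes arr.__sizeof__() plus GC overhead, so for an
# owning array it is at least the raw buffer size (100 * 50 * 8 bytes)
size = sys.getsizeof(arr)
```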