Maintain synchronicity between the lexer and the parser #11457
Open
dhruvmanila wants to merge 30 commits into main from dhruv/parser-phase-2
+8,082 −6,054
Conversation
dhruvmanila force-pushed the dhruv/parser-phase-2 branch 2 times, most recently from 30ecb9c to bd86d41 on May 20, 2024 06:28
dhruvmanila force-pushed the dhruv/parser-phase-2 branch 6 times, most recently from 06d6feb to 4da0675 on May 30, 2024 06:34
CodSpeed Performance Report: Merging #11457 will improve performance by 27.98%.
dhruvmanila force-pushed the dhruv/parser-phase-2 branch from 9ef4fb5 to 3597372 on May 31, 2024 03:18
This PR updates the `Lexer` to make it lazy, in the sense that tokens are emitted only when requested.

* Remove the `Iterator` implementation
* Remove `SoftKeywordTransformer`
* Collect all `LexicalError`s in the lexer and return them on the `finish` call
* Store the `current` token and provide methods to query it
* Implement a new `TokenValue` struct which stores the owned value for certain tokens [^1]
* Update all `lex_*` methods to return the token instead of a `Result`
* Update `Lexer::new` to take a `start_offset` parameter
  * This will be used to start lexing from a specific offset
* Add checkpoint/rewind logic for the lexer, f-strings and the indentation stack (see the sketch after this list)
* Remove the `Iterator` implementation
* Provide lookahead via the checkpoint/rewind logic on the `Lexer`
* Store all the lexed tokens
* Update soft keywords to identifiers when parsing as an identifier
* Add `bump_value` to bump the given token kind and return the corresponding owned value
* Add `bump_any` to bump any token except for end of file

[^1]: At the end, we'll remove `Tok` completely
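Below is a minimal, self-contained sketch of the checkpoint/rewind idea, with hypothetical `TinyLexer` and `LexerCheckpoint` names; the real `Lexer` also snapshots the f-string stack, the indentation stack, and pending errors.

```rs
// Illustrative sketch only; not the actual ruff `Lexer`.
#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenKind {
    Name,
    Equal,
    Number,
    EndOfFile,
}

struct LexerCheckpoint {
    offset: usize,
}

struct TinyLexer<'src> {
    source: &'src str,
    offset: usize,
}

impl<'src> TinyLexer<'src> {
    fn new(source: &'src str) -> Self {
        Self { source, offset: 0 }
    }

    /// Lexes and returns the next token, advancing the cursor. Nothing is
    /// produced until the caller asks for it (the "lazy" part).
    fn next_token(&mut self) -> TokenKind {
        let rest = self.source[self.offset..].trim_start();
        self.offset = self.source.len() - rest.len();
        let Some(first) = rest.chars().next() else {
            return TokenKind::EndOfFile;
        };
        let (kind, len) = match first {
            '=' => (TokenKind::Equal, 1),
            c if c.is_ascii_digit() => (
                TokenKind::Number,
                rest.find(|c: char| !c.is_ascii_digit()).unwrap_or(rest.len()),
            ),
            _ => (
                TokenKind::Name,
                rest.find(char::is_whitespace).unwrap_or(rest.len()),
            ),
        };
        self.offset += len;
        kind
    }

    fn checkpoint(&self) -> LexerCheckpoint {
        LexerCheckpoint { offset: self.offset }
    }

    fn rewind(&mut self, checkpoint: LexerCheckpoint) {
        self.offset = checkpoint.offset;
    }
}

fn main() {
    let mut lexer = TinyLexer::new("x = 1");
    assert_eq!(lexer.next_token(), TokenKind::Name);

    // Speculate: look ahead one token, then rewind as if nothing happened.
    let checkpoint = lexer.checkpoint();
    assert_eq!(lexer.next_token(), TokenKind::Equal);
    lexer.rewind(checkpoint);

    assert_eq!(lexer.next_token(), TokenKind::Equal);
    assert_eq!(lexer.next_token(), TokenKind::Number);
    assert_eq!(lexer.next_token(), TokenKind::EndOfFile);
}
```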
## Summary

This PR adds the checkpoint logic to the parser and token source. It also updates the lexer checkpoint to contain the error position.
## Summary

This PR updates various `bump` methods to allow the parser to bump a soft keyword as a name token. This is required because the token vector should store the resolved token; otherwise, the token vector and the AST are out of sync.

The process is as follows:

* One common method to bump the given kind for the parser, which is `do_bump`
* This calls the `bump` method on the token source
* The token source adds the given token kind and bumps itself to the next non-trivia token
* While doing this bump, it still adds the trivia tokens to the token vector

The end result is that the parser informs the token source to add the given kind to the token vector and move on to the next token. Here, we can then introduce a `bump_soft_keyword_as_name` method which asserts that the current token is a soft keyword and bumps it as a name token instead. The `parse_identifier` method then calls the new method instead.
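A rough sketch of that flow, using heavily simplified, hypothetical types; the real `TokenSource` pulls lazily from the lexer and records trivia tokens as well.

```rs
// Simplified sketch; not the actual ruff parser types.
#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenKind {
    Name,
    Match, // soft keyword
    Type,  // soft keyword
    Other,
}

impl TokenKind {
    fn is_soft_keyword(self) -> bool {
        matches!(self, TokenKind::Match | TokenKind::Type)
    }
}

struct TokenSource {
    current: TokenKind,
    /// The finished token vector must contain *resolved* kinds so that it
    /// stays in sync with the AST.
    tokens: Vec<TokenKind>,
}

impl TokenSource {
    fn bump(&mut self, kind: TokenKind) {
        // Record the kind the parser decided on, not the raw lexed kind.
        self.tokens.push(kind);
        // ... advance to the next non-trivia token here ...
        self.current = TokenKind::Other;
    }
}

struct Parser {
    source: TokenSource,
}

impl Parser {
    /// Single common bump used by every other bump method.
    fn do_bump(&mut self, kind: TokenKind) {
        self.source.bump(kind);
    }

    /// Re-interpret a soft keyword (`match`, `type`, ...) as a name.
    fn bump_soft_keyword_as_name(&mut self) {
        assert!(self.source.current.is_soft_keyword());
        self.do_bump(TokenKind::Name);
    }
}

fn main() {
    let mut parser = Parser {
        source: TokenSource { current: TokenKind::Match, tokens: Vec::new() },
    };
    // `match` used as an identifier: the token vector stores `Name`.
    parser.bump_soft_keyword_as_name();
    assert_eq!(parser.source.tokens, vec![TokenKind::Name]);
}
```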
## Summary

This PR fixes a bug where the parser would bump the token before updating `prev_token_end`. The order should be reversed.
## Summary

This PR updates the `type` alias statement parsing to use lookahead to resolve whether `type` is used as a keyword or an identifier depending on the context. The strategy is that the first token after `type` should be either a name or a soft keyword, and the second should be either `[` or `=`. Remember that `type type = int` is valid, where the first `type` is a keyword while the second `type` is an identifier.
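A rough illustration of this two-token lookahead as a free function; the `TokenKind` variants and the function itself are hypothetical simplifications, since the real parser peeks through its token source.

```rs
#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenKind {
    Name,
    SoftKeyword, // e.g. `type`, `match`
    LeftSquareBracket,
    Equal,
    Other,
}

/// `type` starts a type alias statement only if it is followed by a name (or
/// a soft keyword used as a name) and then either `[` or `=`.
fn type_starts_alias_statement(first: TokenKind, second: TokenKind) -> bool {
    matches!(first, TokenKind::Name | TokenKind::SoftKeyword)
        && matches!(second, TokenKind::LeftSquareBracket | TokenKind::Equal)
}

fn main() {
    use TokenKind::*;
    // `type X = int` -> `type` is a keyword
    assert!(type_starts_alias_statement(Name, Equal));
    // `type type = int` -> the first `type` is a keyword, the second a name
    assert!(type_starts_alias_statement(SoftKeyword, Equal));
    // `type(obj)` -> `type` is an identifier (a call), not a type alias
    assert!(!type_starts_alias_statement(Other, Other));
}
```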
## Summary

This PR adds support for parsing a `match` statement based on whether `match` is used as a keyword or an identifier.

The logic is as follows:

1. Use two-token lookahead to classify the `match` token based on the table below
2. If we can't determine whether it's a keyword or an identifier, use speculative parsing
3. Assume that `match` is a keyword and parse the subject expression
4. Then, based on certain heuristics, determine whether our assumption is correct

For (4), the heuristics are (sketched in code below):

1. If the current token is `:`, then it's a keyword
2. If the current token is a newline, then
   1. If the following two tokens are `Indent` and `Case`, it's a keyword
   2. Otherwise, it's an identifier
3. Otherwise, it's an identifier

In the above heuristics, condition (2) lets the parser correctly identify the following as a `match` statement despite the missing `:`:

```py
match foo
    case _:
        pass
```

### Classification table

Here, the token is the one following the `match` token. We use two-token lookahead for `not in` because the additional `in` keyword lets us classify `match` as an identifier rather than a keyword.

| Token | Keyword | Identifier | Either |
| -- | -- | -- | -- |
| Name | ✅ | | |
| Int | ✅ | | |
| Float | ✅ | | |
| Complex | ✅ | | |
| String | ✅ | | |
| FStringStart | ✅ | | |
| FStringMiddle | - | - | - |
| FStringEnd | - | - | - |
| IpyEscapeCommand | - | - | - |
| Newline | | ✅ | |
| Indent | - | - | - |
| Dedent | - | - | - |
| EndOfFile | | ✅ | |
| `?` | - | - | - |
| `!` | | ✅ | |
| `(` | | | ✅ |
| `)` | | ✅ | |
| `[` | | | ✅ |
| `]` | | ✅ | |
| `{` | ✅ | | |
| `}` | | ✅ | |
| `:` | | ✅ | |
| `,` | | ✅ | |
| `;` | | ✅ | |
| `.` | | ✅ | |
| `*` | | | ✅ |
| `+` | | | ✅ |
| `-` | | | ✅ |
| Other binary operators | | ✅ | |
| `~` | ✅ | | |
| `not` | ✅ | | |
| `not in` | | ✅ | |
| Boolean operators | | ✅ | |
| Comparison operators | | ✅ | |
| Augmented assign operators | | ✅ | |
| `...` | ✅ | | |
| `lambda` | ✅ | | |
| `await` | ✅ | | |
| `yield` | ✅ | | |
| Other keywords | | ✅ | |
| Soft keywords | ✅ | | |
| Singletons | ✅ | | |
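A minimal sketch of the post-speculation heuristic from step (4), written as a standalone function over the tokens that follow the speculatively parsed subject; the variant set and the function are hypothetical, since the real parser queries its token source and checkpoints instead of a slice.

```rs
#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenKind {
    Colon,
    Newline,
    Indent,
    Case,
    Other,
}

/// Decide whether the speculatively parsed `match <subject>` was really a
/// `match` statement, based on the tokens that follow the subject.
fn subject_confirms_match_keyword(following: &[TokenKind]) -> bool {
    match following {
        // `match foo:` -> keyword
        [TokenKind::Colon, ..] => true,
        // `match foo` then a newline: only a keyword if an indented `case`
        // block follows (this also covers a missing `:`).
        [TokenKind::Newline, TokenKind::Indent, TokenKind::Case, ..] => true,
        // Anything else: `match` was an identifier after all.
        _ => false,
    }
}

fn main() {
    use TokenKind::*;
    assert!(subject_confirms_match_keyword(&[Colon, Newline]));
    assert!(subject_confirms_match_keyword(&[Newline, Indent, Case]));
    // e.g. `match` followed by something that is not a match body
    assert!(!subject_confirms_match_keyword(&[Newline, Other]));
}
```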
## Summary

This PR updates various checks to consider soft keywords.

1. Update the following methods to also check whether the parser is at a soft keyword token:
   1. `at_simple_stmt`
   2. `at_stmt`
   3. `at_expr`
   4. `at_pattern_start`
   5. `at_mapping_pattern_start`
2. Update various references of `parser.at(TokenKind::Name)` to use `parser.at_name_or_soft_keyword`. In the future, we should update it to also include other keywords on a case-by-case basis.
3. Re-use `parse_name` and `parse_identifier` for literal pattern parsing. We should always use one of these methods instead of doing the same manually, to make sure that soft keywords are correctly transformed.

For (1) and (2), we could've just extended the `EXPR_SET` with the soft keyword tokens, but I think it's better to be explicit about what the method checks and to keep the statement token set separate from the soft keywords.

## Test Plan

Add a few test cases and update the snapshots. These snapshots were generated through a local fork of the parser which compiles.
## Summary

This PR updates certain `TokenValue` enum variants to use struct fields instead of tuple variants. The main reason is to avoid a large diff for test snapshots, so it becomes easier to diagnose any issues. This is temporary and will be updated once everything is finalized.
## Summary

This PR updates the lexer and parser tests with the new changes. Note that these snapshots were updated in a separate repository which isolated the parser crate along with its dependencies.
This PR updates the parser API within the `ruff_python_parser` crate. It doesn't change any of the references in this PR. The final API looks like:

```rs
pub fn parse_module(source: &str) -> Result<Program<ModModule>, ParseError> {}

pub fn parse_expression(source: &str) -> Result<Program<ModExpression>, ParseError> {}

pub fn parse_expression_range(
    source: &str,
    range: TextRange,
) -> Result<Program<ModExpression>, ParseError> {}

pub fn parse(source: &str, mode: Mode) -> Result<Program<Mod>, ParseError> {}

// Temporary. `parse` will replace this function once we update the downstream
// tools to work with programs containing syntax errors.
pub fn parse_unchecked(source: &str, mode: Mode) -> Program<Mod> {}
```

Following is a detailed list of changes:

* Make `Program` generic over `T` which can be either `Mod` (enum), `ModModule` or `ModExpression`
* Add helper methods to cast `Mod` into `ModModule` or `ModExpression`
* Add a helper method `Program::into_result` which converts a `Program<T>` into a `Result<Program<T>, ParseError>` where the `Err` variant contains the first `ParseError`
* Update `TokenSource` to store the comment ranges
* The parser crate depends on `ruff_python_trivia` because of `CommentRanges`. This struct could possibly be moved into the parser crate itself at the end
* Move from `parse_expression_starts_at` to `parse_expression_range` which parses the source code at the given range using `Mode::Expression`. Unlike the `starts_at` variant, this accepts the entire source code
* Remove all access to the `Lexer`
* Remove all `parse_*` functions which work on tokens provided by the caller

The good news is that the tests in `ruff_python_parser` can be run:

```
cargo insta test --package ruff_python_parser
```
This PR updates the parser API references per the updated function signatures, especially the return type. Additionally, it exposes the `Program` to the linter by using the new API and also storing it on the `Checker` to be used by the rules. Most of the changes are in test functions. There are only a handful of usages in the non-test source code.
## Summary

Part of #11401

This PR refactors most usages of `lex_starts_at` to use the `Tokens` struct available on the `Program`. This PR also introduces the following two APIs:

1. `count` (on `StringLiteralValue`) to return the number of string literal parts in the string expression
2. `after` (on `Tokens`) to return the token slice after the given `TextSize` offset

## Test Plan

I don't really have a way to test this currently, so I'll have to wait until all changes are made so that the code compiles.
## Summary

This PR replaces the usage of `lex_starts_at` in the formatter with the `Tokens` struct. It also updates the formatter API to take in the `Program`. Earlier, it would take the individual parts of the program, but the formatter requires three fields from the program, so we might as well pass the containing struct itself.
## Summary

This PR implements `TokenFlags`, which will be stored on each `Token`; certain flags will be set depending on the token kind. Currently, it's equivalent to `AnyStringFlags`, but in the future it will help provide additional information regarding certain tokens like unterminated strings, number kinds, etc.

The main motivation for adding `TokenFlags` is to store certain information related to the token which will then be used by downstream tools. Currently, the information is only related to string tokens. The downstream tools should not be allowed to access the flags directly; they're an implementation detail. Instead, methods will be provided on `Token` to query certain information. An example can be seen in the follow-up PR (#11592). For example, the `Stylist` and `Indexer` use the string flags stored on the `String`/`FStringStart` token to get certain information. They will be updated to use these flags instead, thus removing the need for `Tok` completely.

Prior art in TypeScript: https://github.com/microsoft/TypeScript/blob/16beff101ae1dae0600820ebf22632ac8a40cfc8/src/compiler/types.ts#L2788-L2827
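A dependency-free sketch of the idea; the flag names and `Token` accessor below are illustrative, not the actual ruff `TokenFlags`, which mirror `AnyStringFlags` and stay hidden behind `Token` methods.

```rs
/// Minimal bit-set sketch of token flags.
#[derive(Debug, Clone, Copy, Default, PartialEq)]
struct TokenFlags(u8);

impl TokenFlags {
    const DOUBLE_QUOTES: TokenFlags = TokenFlags(1 << 0);
    const TRIPLE_QUOTED: TokenFlags = TokenFlags(1 << 1);
    const F_STRING: TokenFlags = TokenFlags(1 << 2);
    const RAW_STRING: TokenFlags = TokenFlags(1 << 3);

    fn union(self, other: TokenFlags) -> TokenFlags {
        TokenFlags(self.0 | other.0)
    }

    fn contains(self, other: TokenFlags) -> bool {
        self.0 & other.0 == other.0
    }
}

/// Downstream tools query the token, never the raw flags.
struct Token {
    flags: TokenFlags,
}

impl Token {
    fn is_triple_quoted_string(&self) -> bool {
        self.flags.contains(TokenFlags::TRIPLE_QUOTED)
    }
}

fn main() {
    // Flags for the `FStringStart` token of `f"""..."""`.
    let token = Token {
        flags: TokenFlags::F_STRING
            .union(TokenFlags::TRIPLE_QUOTED)
            .union(TokenFlags::DOUBLE_QUOTES),
    };
    assert!(token.is_triple_quoted_string());
    assert!(!token.flags.contains(TokenFlags::RAW_STRING));
}
```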
## Summary

This PR updates the usages of `CommentRanges` to come from the parsed output instead of the `Indexer`.

First, the `CommentRanges` are now built during the parsing step. In the future, depending on the performance numbers, this could be changed to happen after the parsing step. Nevertheless, the struct will be owned by the parsed output, so all references should now use the struct from the parsed output instead of the indexer.

The motivation for building `CommentRanges` while parsing is that it can then be shared between the linter and the formatter, as both require it. This also removes the need for a dedicated method (`tokens_and_ranges`).

There are two main updates:

1. Functions which only require `CommentRanges` are passed the type directly
2. If a function requires other fields from `Program`, then the program is passed instead
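A minimal sketch of how comment ranges can fall out of trivia handling; the types here are hypothetical, and the real token source pulls lazily from the lexer rather than from a pre-lexed vector, with the ranges ending up in the `CommentRanges` owned by the parsed output.

```rs
use std::ops::Range;

#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenKind {
    Comment,
    NonLogicalNewline,
    Name,
    Newline,
    EndOfFile,
}

/// Sketch of a token source that records comment ranges as a by-product of
/// skipping trivia tokens.
struct TokenSource {
    lexed: std::vec::IntoIter<(TokenKind, Range<u32>)>,
    comment_ranges: Vec<Range<u32>>,
}

impl TokenSource {
    fn next_non_trivia(&mut self) -> TokenKind {
        loop {
            let Some((kind, range)) = self.lexed.next() else {
                return TokenKind::EndOfFile;
            };
            match kind {
                // Trivia: remember comments, drop non-logical newlines.
                TokenKind::Comment => self.comment_ranges.push(range),
                TokenKind::NonLogicalNewline => {}
                _ => return kind,
            }
        }
    }
}

fn main() {
    // `x  # comment\n`
    let mut source = TokenSource {
        lexed: vec![
            (TokenKind::Name, 0..1),
            (TokenKind::Comment, 3..12),
            (TokenKind::Newline, 12..13),
        ]
        .into_iter(),
        comment_ranges: Vec::new(),
    };
    assert_eq!(source.next_non_trivia(), TokenKind::Name);
    assert_eq!(source.next_non_trivia(), TokenKind::Newline);
    assert_eq!(source.comment_ranges, vec![3..12]);
    assert_eq!(source.next_non_trivia(), TokenKind::EndOfFile);
}
```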
This PR updates the `Stylist` and `Indexer` to get the information from `TokenFlags` instead of the owned token data. This removes the need for `Tok` completely. Both `Stylist` and `Indexer` are now constructed using the parsed program. This PR also removes the final few references to `Tok` and related structs. This means that clippy will now be useful and a follow-up PR will fix all the errors.
## Summary

This PR updates the methods on the `Tokens` struct to consider the "gap" between tokens. Additionally, it adds a constraint to the methods: the offsets shouldn't fall within a token's range, otherwise the method will panic.

## Test Plan

Add unit test cases.
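A rough sketch of what that constraint could look like for an `after`-style method; the `Token` type with a plain `Range<u32>` is hypothetical, while the real implementation works on `TextSize`/`TextRange` and also backs `in_range`.

```rs
use std::ops::Range;

#[derive(Debug, Clone)]
struct Token {
    kind: &'static str,
    range: Range<u32>,
}

/// Returns the tokens that start at or after `offset`.
///
/// Panics if `offset` falls strictly inside a token: such an offset is
/// ambiguous because the token can neither be included nor excluded whole.
fn after(tokens: &[Token], offset: u32) -> &[Token] {
    let start = tokens.partition_point(|token| token.range.end <= offset);
    if let Some(token) = tokens.get(start) {
        assert!(
            offset <= token.range.start,
            "offset {offset} is inside the token {:?}",
            token.kind
        );
    }
    &tokens[start..]
}

fn main() {
    // `x = 1` -> Name(0..1) Equal(2..3) Number(4..5)
    let tokens = vec![
        Token { kind: "Name", range: 0..1 },
        Token { kind: "Equal", range: 2..3 },
        Token { kind: "Number", range: 4..5 },
    ];

    // Offsets in the "gaps" between tokens are fine.
    assert_eq!(after(&tokens, 1).len(), 2);
    assert_eq!(after(&tokens, 2).len(), 2);
    assert_eq!(after(&tokens, 3).len(), 1);
    assert_eq!(after(&tokens, 5).len(), 0);
}
```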
## Summary

This PR fixes various bugs found when running the test suite. Note that it doesn't update the code to make it compile; that is done in a separate PR. Refer to the review comments for each individual bug.
## Summary

This PR fixes all the compilation errors raised by `clippy`. It doesn't look at the tests. Apart from that, it also does the following:

* Expose a `lex` method which creates the `Lexer` object. This is used in the benchmark crate. Ideally, we shouldn't expose the lexer, and in the future we should move the lexer benchmark into the parser crate
* Update the parser reference in the `red_knot` crate
* Change all references of `&[LexResult]` to `&Tokens`
* Add `CommentRanges` to the `LinterResult` as it's required after the linter run. The `Program` is consumed because it requires an owned `ParseError`
## Summary

This PR updates the lexer test snapshots to include the `TokenFlags`.

## Test Plan

Verify the snapshots.
## Summary

This PR fixes a bug to classify `match` as a keyword for:

```py
match not foo:
    case _:
        ...
```

## Test Plan

This PR also adds a bunch of test cases around the classification of the `match` token as a keyword or an identifier, including speculative parsing.
## Summary

This PR renames `Program` to `Parsed` and shortens `tokens_in_range` to `in_range`.

The `Program` terminology is used in `red_knot`, and this rename avoids the conflict. The `tokens_in_range` method is usually called as `tokens.tokens_in_range(..)`, which repeats the word "tokens", so let's shorten it.
## Summary

This PR is a follow-up to #11475 that uses enum values where there was a struct with a single field. The main motivation for using a struct was to keep the snapshot update to a minimum, which allowed me to validate it. Now that the validation is done, we can revert it.

## Test Plan

Update the snapshots.
dhruvmanila force-pushed the dhruv/parser-phase-2 branch from 2d34e13 to e8ab801 on May 31, 2024 06:27
## Summary

This PR fixes a bug by resetting the cursor before emitting a `Dedent` token that had preceding whitespace. Previously, we would just use `TextRange::empty(self.offset())` because the range for each token was computed at the place where it was emitted. Now, the lexer handles this centrally in `next_token`, which means we require special handling at certain places. This is one such case.

Computing the range centrally in `next_token` has the benefit that the return type of the lexer methods is a simple `TokenKind`, but it does add a small amount of complexity to handle such cases. I think it's worth doing, but if anyone thinks otherwise, feel free to comment.

## Test Plan

Add a test case and update the snapshot.
dhruvmanila requested review from carljm, MichaReiser and AlexWaygood as code owners on May 31, 2024 14:13
Summary
This PR updates the entire parser stack in multiple ways:
Make the lexer lazy

* Make the `Lexer` lazy (#11244)

Previously, Ruff's lexer would act as an iterator. The parser would collect all the tokens in a vector first and then process the tokens to create the syntax tree.
The first task in this project is to update the entire parsing flow to make the lexer lazy. This includes the `Lexer`, `TokenSource`, and `Parser`. For context, the `TokenSource` is a wrapper around the `Lexer` to filter out the trivia tokens[^1]. Now, the parser will ask the token source to get the next token, and only then will the lexer continue and emit the token. This means that the lexer needs to be aware of the "current" token. When `next_token` is called, the current token will be updated with the newly lexed token.

The main motivation to make the lexer lazy is to allow re-lexing a token in a different context. This is going to be really useful to make the parser error resilient. For example, currently the emitted tokens remain the same even if the parser can recover from an unclosed parenthesis. This is important because the lexer emits a `NonLogicalNewline` in a parenthesized context and a normal `Newline` in a non-parenthesized context. These different kinds of newlines are also used to emit the indentation tokens, which are important for the parser as they're used to determine the start and end of a block.

Additionally, this allows us to implement the following functionalities:
* Remove the `SoftKeywordTransformer` and instead use lookahead or speculative parsing to determine whether a soft keyword is a keyword or an identifier
* Remove the `Tok` enum. The `Tok` enum represents the tokens emitted by the lexer, but it contains owned data which makes it expensive to clone. The new `TokenKind` enum just represents the type of token, which is very cheap.

This brings up a question as to how the parser will get the owned value which was stored on `Tok`. This will be solved by introducing a new `TokenValue` enum which only contains a subset of token kinds which have owned values. This is stored on the lexer and is requested by the parser when it wants to process the data. For example, see ruff/crates/ruff_python_parser/src/parser/expression.rs, lines 1260 to 1262 (at 8196720).
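A simplified sketch of this hand-off between the lexer and the parser; the variant set and the `take_value` name here are illustrative, not the actual ruff API, which also covers strings, f-strings, and IPython escape commands.

```rs
// Sketch only: the owned data lives on the lexer until the parser asks for it,
// so the token kind itself stays a cheap, copyable enum.
#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenKind {
    Name,
    Int,
    Plus,
}

#[derive(Debug, Default)]
enum TokenValue {
    #[default]
    None,
    Name(Box<str>),
    Int(i64),
}

struct Lexer {
    current_kind: TokenKind,
    current_value: TokenValue,
}

impl Lexer {
    /// Hands the owned value of the current token to the caller, leaving
    /// `TokenValue::None` behind (the value is only requested once).
    fn take_value(&mut self) -> TokenValue {
        std::mem::take(&mut self.current_value)
    }
}

fn main() {
    let mut lexer = Lexer {
        current_kind: TokenKind::Name,
        current_value: TokenValue::Name("foo".into()),
    };
    assert_eq!(lexer.current_kind, TokenKind::Name);

    // The parser bumps a `Name` token and asks for its owned value.
    match lexer.take_value() {
        TokenValue::Name(name) => assert_eq!(&*name, "foo"),
        other => panic!("expected a name, got {other:?}"),
    }
}
```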
Remove SoftKeywordTransformer

* `type` soft keyword (#11442)
* Use speculative parsing for the `match` statement (#11443)

For context, https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220 added support for soft keywords in the parser using infinite lookahead to classify a soft keyword as a keyword or an identifier. This is a brilliant idea as it basically wraps the existing Lexer and works on top of it, which means that the logic for lexing and re-lexing a soft keyword remains separate. The change here is to remove the `SoftKeywordTransformer` and let the parser determine this based on context, lookahead, and speculative parsing.

A `match` token starts a compound statement while a `type` token starts a simple statement. The parser already knows this, which helps when classifying the `type` and `match` soft keywords. For the `match` soft keyword, there are certain cases which we can't classify based on lookahead alone. The idea here is to create a checkpoint and keep parsing. Based on whether the parsing was successful and what tokens are ahead, we can classify the remaining cases. Refer to "Use speculative parsing for the `match` statement" (#11443) for more details.

If the soft keyword is being parsed in an identifier context, it'll be converted to an identifier and the emitted token will be updated as well. Refer to ruff/crates/ruff_python_parser/src/parser/expression.rs, lines 487 to 491 (at 8196720).

The `case` soft keyword doesn't require any special handling because it'll be a keyword only in the context of a match statement.

Update the parser API
* `Program` to linter (#11505)

Now that the lexer is in sync with the parser, and the parser helps determine whether a soft keyword is a keyword or an identifier, the lexer cannot be used on its own. The reason is that it's not sensitive to the context (which is correct). This means that the parser API needs to be updated to not allow any access to the lexer.

Previously, there were multiple ways to parse the source code: (1) from the source text directly, and (2) from tokens that the caller had already lexed. Now that the lexer and parser are working together, the API corresponding to (2) cannot exist. The final API is mentioned in this PR description: #11494.
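For illustration, a hedged usage sketch of that final API, assuming `parse_module`, `parse_unchecked`, and `Mode` are re-exported from `ruff_python_parser` as listed in #11494; exact exports and error formatting may differ.

```rs
use ruff_python_parser::{parse_module, parse_unchecked, Mode};

fn main() {
    // The checked entry point: a syntax error surfaces as a `ParseError`.
    match parse_module("if x:\n    pass\n") {
        Ok(_program) => println!("parsed a module"),
        Err(error) => println!("syntax error: {error:?}"),
    }

    // The temporary unchecked entry point always returns a program, keeping
    // any syntax errors inside it for error-resilient consumers.
    let _program = parse_unchecked("if x:\n    pass\n", Mode::Module);
}
```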
Refactor the downstream tools (linter and formatter)
* Replace `lex_starts_at` with `Tokens` (#11511)
* Replace `lex_starts_at` with `Tokens` in the formatter (#11515)
* `Tokens` (#11529)
* `lex` usages (#11562)
* `Stylist`, `Indexer` to use tokens from parsed output (#11592)

And, the final set of changes involves updating all references of the lexer and the `Tok` enum. This was done in two parts:

* `W605` to the AST checker (#11402)
* `PLE1300` and `PLE1307` (#11406)
* `TokenKind` in `doc_lines_from_tokens` (#11418)
* `TokenKind` in the blank lines checker (#11419)
* `TokenKind` (#11420)
* `UP034` to use `TokenKind` instead of `Tok` (#11424)

For (2), there were various strategies used:
* A new `Tokens` struct which wraps the token vector and adds methods to query a certain subset of tokens. These include:
  * `up_to_first_unknown`, which replaces the `tokenize` function
  * `in_range` and `after`, which replace the `lex_starts_at` function, where the former returns the tokens within the given range while the latter returns all the tokens after the given offset
* A new `TokenFlags`, which is a set of flags to query certain information from a token. Currently, this information is only limited to string-type tokens but can be expanded to include other information in the future as needed (Implement `TokenFlags` stored on each `Token` #11578)
* Move `CommentRanges` to the parsed output because this information is common to both the linter and the formatter. This removes the need for the `tokens_and_ranges` function.

Test Plan
What's next?
The next step is to introduce the re-lexing logic and update the parser to feed the recovery information to the lexer so that it can emit the correct token. This moves us one step closer to having error resilience in the parser and gives Ruff the ability to lint even if the source code contains syntax errors.
Footnotes

[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`.