The next section of this manual describes installation procedures for JFlex. If you have never worked with JLex, or just want to compare a JLex and a JFlex scanner specification, you should also read Working with JFlex - an example (section 3). All options and the complete specification syntax are presented in Lexical specifications (section 4); Encodings, Platforms, and Unicode (section 5) provides information about scanning text vs. binary files. If you are interested in performance considerations and comparing JLex with JFlex speed, a few words on performance (section 7) might be just right for you. Those who want to use their old JLex specifications may want to check out section 8.1 Porting from JLex to avoid possible problems with non-portable or non-standard JLex behaviour that has been fixed in JFlex. Section 8.2 talks about porting scanners from the Unix tools lex and flex. Interfacing JFlex scanners with the LALR parser generators CUP and BYacc/J is explained in working together (section 9). Section 10 Bugs gives a list of currently known active bugs. The manual concludes with notes about Copying and License (section 11) and references.
If you unpack the distribution archive to, say, C:\, the following directory structure
should be generated:
C:\jflex-1.5.0\
+--bin\                      (start scripts)
+--doc\                      (FAQ and manual)
+--examples\
|  +--byaccj\                (calculator example for BYacc/J)
|  +--cup\                   (calculator example for cup)
|  +--interpreter\           (interpreter example for cup)
|  +--java\                  (Java lexer specification)
|  +--simple-maven\          (example scanner built with maven)
|  +--standalone-maven\      (a simple standalone scanner, built with maven)
+--lib\                      (precompiled classes, skeleton files)
+--src\
   +--main\java\
   |  +--jflex\              (source code of JFlex)
   |  +--jflex\gui\          (source code of JFlex UI classes)
   |  +--java_cup\runtime\   (source code of cup runtime classes)
   +--main\jflex\            (JFlex scanner spec)
   +--main\cup\              (JFlex parser spec)
   +--main\resources\        (messages and default skeleton file)
   +--test\                  (unit tests)
Edit the file bin\jflex.bat (in the example it's C:\JFlex\bin\jflex.bat) such that JAVA_HOME contains the directory where your Java JDK is installed (e.g. C:\java) and JFLEX_HOME contains the directory where JFlex is installed (in the example: C:\JFlex).
Finally, include the bin\ directory of JFlex in your path (the one that contains the start script, in the example: C:\JFlex\bin).
To install JFlex on a Unix system, follow these two steps:
tar -C /usr/share -xvzf jflex-1.5.0.tar.gz
(The example is for site-wide installation. You need to be root for that. User installation works exactly the same way -- just choose a directory where you have write permission.)
ln -s /usr/share/JFlex/bin/jflex /usr/bin/jflex
If the Java interpreter is not in your binary path, you need to supply its location in the script bin/jflex.
You can verify the integrity of the downloaded file with the MD5 checksum available on the JFlex download page. If you put the checksum file in the same directory as the archive, you can run:
md5sum --check
jflex-1.5.0.tar.gz.md5
It should tell you
jflex-1.5.0.tar.gz: OK
jflex <options> <inputfiles>
It is also possible to skip the start script in bin/ and instead include the file lib/JFlex.jar in your CLASSPATH environment variable. Then you run JFlex with:
java jflex.Main <options> <inputfiles>
or with:
java -jar JFlex.jar <options> <inputfiles>
The input files and options are in both cases optional. If you don't provide a file name on the command line, JFlex will pop up a window to ask you for one.
JFlex knows about the following options:

-d <directory>
    writes the generated file to the directory <directory>

--skel <file>
    uses external skeleton <file>. This is mainly for JFlex
    maintenance and special low-level customisations. Use only when you
    know what you are doing! JFlex comes with a skeleton file in the
    src directory that reflects exactly the internal, pre-compiled
    skeleton and can be used with the --skel option.

--nomin
    skips the DFA minimisation step during scanner generation

--jlex
    tries to be as compatible with JLex as possible

--dot
    generates graphviz dot files for the NFA, DFA and minimised DFA

--dump
    prints the NFA, DFA and minimised DFA to standard output

--legacydot
    the . (dot) meta character matches [^\n]
    instead of
    [^\n\r\u000B\u000C\u0085\u2028\u2029]

--noinputstreamctor
    don't include a scanner constructor taking a java.io.InputStream as argument

--verbose
    or -v
    displays generation progress messages (enabled by default)

--quiet
    or -q
    displays error messages only (no progress or warning messages)

--time
    displays time statistics about the code generation process

--version
    prints the version number of this copy of JFlex

--info
    prints system and JDK information (useful if you'd like to report a problem)

--unicodever <ver>
    prints all supported properties for Unicode version <ver>

--pack
    uses the %pack code generation method by default

--table
    uses the %table code generation method by default

--switch
    uses the %switch code generation method by default

--help
    or -h
    prints a short help message
/* JFlex example: part of Java language lexer specification */
import java_cup.runtime.*;

/**
 * This class is a simple example lexer.
 */
%%

%class Lexer
%unicode
%cup
%line
%column

%{
  StringBuffer string = new StringBuffer();

  private Symbol symbol(int type) {
    return new Symbol(type, yyline, yycolumn);
  }
  private Symbol symbol(int type, Object value) {
    return new Symbol(type, yyline, yycolumn, value);
  }
%}

LineTerminator = \r|\n|\r\n
InputCharacter = [^\r\n]
WhiteSpace     = {LineTerminator} | [ \t\f]

/* comments */
Comment = {TraditionalComment} | {EndOfLineComment} | {DocumentationComment}

TraditionalComment   = "/*" [^*] ~"*/" | "/*" "*"+ "/"
EndOfLineComment     = "//" {InputCharacter}* {LineTerminator}
DocumentationComment = "/**" {CommentContent} "*"+ "/"
CommentContent       = ( [^*] | \*+ [^/*] )*

Identifier = [:jletter:] [:jletterdigit:]*

DecIntegerLiteral = 0 | [1-9][0-9]*

%state STRING

%%

/* keywords */
<YYINITIAL> "abstract"           { return symbol(sym.ABSTRACT); }
<YYINITIAL> "boolean"            { return symbol(sym.BOOLEAN); }
<YYINITIAL> "break"              { return symbol(sym.BREAK); }

<YYINITIAL> {
  /* identifiers */
  {Identifier}                   { return symbol(sym.IDENTIFIER); }

  /* literals */
  {DecIntegerLiteral}            { return symbol(sym.INTEGER_LITERAL); }
  \"                             { string.setLength(0); yybegin(STRING); }

  /* operators */
  "="                            { return symbol(sym.EQ); }
  "=="                           { return symbol(sym.EQEQ); }
  "+"                            { return symbol(sym.PLUS); }

  /* comments */
  {Comment}                      { /* ignore */ }

  /* whitespace */
  {WhiteSpace}                   { /* ignore */ }
}

<STRING> {
  \"                             { yybegin(YYINITIAL);
                                   return symbol(sym.STRING_LITERAL, string.toString()); }
  [^\n\r\"\\]+                   { string.append( yytext() ); }
  \\t                            { string.append('\t'); }
  \\n                            { string.append('\n'); }

  \\r                            { string.append('\r'); }
  \\\"                           { string.append('\"'); }
  \\                             { string.append('\\'); }
}

/* error fallback */
[^]                              { throw new Error("Illegal character <"+ yytext()+">"); }
From this specification JFlex generates a .java file with one class that contains code for the scanner. The class will have a constructor taking a java.io.Reader from which the input is read. The class will also have a function yylex() that runs the scanner and that can be used to get the next token from the input (in this example the function actually has the name next_token() because the specification uses the %cup switch).
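A minimal driver for the generated class could look like the following sketch. It assumes the generated Lexer class from this example and the CUP-generated sym class, so it is not a standalone program:

```java
import java.io.FileReader;
import java_cup.runtime.Symbol;

public class LexerDemo {
    public static void main(String[] args) throws Exception {
        // Lexer is generated by JFlex, sym by CUP (both assumed here)
        Lexer lexer = new Lexer(new FileReader(args[0]));
        // next_token() because the example uses %cup; with default
        // settings the scanning method would be named yylex()
        Symbol token = lexer.next_token();
        while (token.sym != sym.EOF) {
            System.out.println(token);
            token = lexer.next_token();
        }
    }
}
```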
As with JLex, the specification consists of three parts, divided by %%:
The code included in %{...%} is copied verbatim into the generated lexer class source. Here you can declare member variables and functions that are used inside scanner actions. In our example we declare a StringBuffer ``string'' in which we will store parts of string literals, and two helper functions ``symbol'' that create java_cup.runtime.Symbol objects with position information for the current token (see section 9.1 JFlex and CUP for how to interface with the parser generator CUP). As with all JFlex options, both %{ and %} must begin a line.
The specification continues with macro declarations. Macros are abbreviations for regular expressions, used to make lexical specifications easier to read and understand. A macro declaration consists of a macro identifier followed by =, then followed by the regular expression it represents. This regular expression may itself contain macro usages. Although this allows a grammar-like specification style, macros are still just abbreviations and not non-terminals - they cannot be recursive or mutually recursive. Cycles in macro definitions are detected and reported at generation time by JFlex.
Here are some of the example macros in more detail:
The last part of the second section in our lexical specification is a lexical state declaration: %state STRING declares a lexical state STRING that can be used in the ``lexical rules'' part of the specification. A state declaration is a line starting with %state followed by a space or comma separated list of state identifiers. There can be more than one line starting with %state.
An input such as ``breaker'' is matched as an identifier, because {Identifier}
matches more of this input at once (i.e. it matches all of it)
than any other rule in the specification. If two regular expressions both
have the longest match for a certain input, the scanner chooses the action
of the expression that appears first in the specification. In that way, we
get for input "break" the keyword "break" and not an
Identifier "break".
In addition to regular expression matches, one can use lexical states to refine a specification. A lexical state acts like a start condition. If the scanner is in lexical state STRING, only expressions that are preceded by the start condition <STRING> can be matched. A start condition of a regular expression can contain more than one lexical state. It is then matched when the lexer is in any of these lexical states. The lexical state YYINITIAL is predefined and is also the state in which the lexer begins scanning. If a regular expression has no start conditions it is matched in all lexical states.
Since you often have a bunch of expressions with the same start conditions, JFlex allows the same abbreviation as the Unix tool flex:
<STRING> {
  expr1   { action1 }
  expr2   { action2 }
}

means that both expr1 and expr2 have the start condition <STRING>.
The first three rules in our example demonstrate the syntax of a regular expression preceded by the start condition <YYINITIAL>.
<YYINITIAL> "abstract"    { return symbol(sym.ABSTRACT); }
matches the input "abstract" only if the scanner is in its start state "YYINITIAL". When the string "abstract" is matched, the scanner function returns the CUP symbol sym.ABSTRACT. If an action does not return a value, the scanning process is resumed immediately after executing the action.
The rules enclosed in <YYINITIAL> { ... } demonstrate the abbreviated syntax and are also matched only in state YYINITIAL.
Of these rules, one may be of special interest:
\"    { string.setLength(0); yybegin(STRING); }
If the scanner matches a double quote in state YYINITIAL we have recognised the start of a string literal. Therefore we clear our StringBuffer that will hold the content of this string literal and tell the scanner with yybegin(STRING) to switch into the lexical state STRING. Because we do not yet return a value to the parser, our scanner proceeds immediately.
In lexical state STRING another rule demonstrates how to refer to the input that has been matched:
[^\n\r\"\\]+    { string.append( yytext() ); }

The expression [^\n\r\"\\]+ matches
all characters in the input up to the next backslash (indicating an
escape sequence such as \n), double quote (indicating the end
of the string), or line terminator (which must not occur in a string literal).
The matched region of the input is referred to with yytext()
and appended to the content of the string literal parsed so far.
The last lexical rule in the example specification is used as an error fallback. It matches any character in any state that has not been matched by another rule. It doesn't conflict with any other rule because it has the least priority (because it's the last rule) and because it matches only one character (so it can't have longest match precedence over any other rule).
jflex java-lang.flex
UserCode
%%
Options and declarations
%%
Lexical rules
In all parts of the specification, comments of the form /* comment text */ and Java-style end-of-line comments starting with // are permitted. JFlex comments do nest - so the number of /* and */ should be balanced.
Each JFlex directive must be situated at the beginning of a line and starts with the % character. Directives that have one or more parameters are described as follows:
%class "classname"
means that you start a line with %class followed by a space followed by the name of the class for the generated scanner (the double quotes are not to be entered, see the example specification in section 3).
Tells JFlex to give the generated class the name "classname" and to write the generated code to a file "classname.java". If the -d <directory> command line option is not used, the code will be written to the directory where the specification file resides. If no %class directive is present in the specification, the generated class will get the name "Yylex" and will be written to a file "Yylex.java". There should be only one %class directive in a specification.
%implements "interface 1" [, "interface 2", ...]

Makes the generated class implement the specified interfaces. If more than one %implements directive is present, all the specified interfaces will be implemented.
%extends "classname"

Makes the generated class a subclass of the class ``classname''. There should be only one %extends directive in a specification.
%public

Makes the generated class public (the class is only accessible in its own package by default).
%final

Makes the generated class final.
%abstract

Makes the generated class abstract.
%apiprivate

Makes all generated methods and fields of the class private. Exceptions are the constructor, user code in the specification, and, if %cup is present, the method next_token. All occurrences of " public " (one space character before and after public) in the skeleton file are replaced by " private " (even if a user-specified skeleton is used). Access to the generated class is expected to be mediated by user class code (see the next switch).
%{
%}
The code enclosed in %{ and %} is copied verbatim into the generated class. Here you can define your own member variables and functions in the generated scanner. Like all options, both %{ and %} must start a line in the specification. If more than one class code directive %{...%} is present, the code is concatenated in order of appearance in the specification.
%init{
%init}
The code enclosed in %init{ and %init} is copied verbatim into the constructor of the generated class. Here, member variables declared in the %{...%} directive can be initialised. If more than one initialiser option is present, the code is concatenated in order of appearance in the specification.
%initthrow{
%initthrow}
or (on a single line) just
%initthrow "exception1" [, "exception2", ...]
Causes the specified exceptions to be declared in the throws clause of the constructor. If more than one %initthrow{ ... %initthrow} directive is present in the specification, all specified exceptions will be declared.
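For example, if the code in an %init{ ... %init} directive may throw an I/O exception, one could declare (a sketch; the exception choice is illustrative only):

```jflex
%initthrow{
java.io.IOException
%initthrow}
```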
%ctorarg "type" "ident"

Adds the specified argument to the constructors of the generated scanner. If more than one such directive is present, the arguments are added in order of occurrence in the specification. Note that this option conflicts with the %standalone and %debug directives, because there is no sensible default that can be created automatically for such parameters in the generated main methods. JFlex will warn in this case and generate an additional default constructor without these parameters and without user init code (which might potentially refer to the parameters).
%scanerror "exception"

Causes the generated scanner to throw an instance of the specified exception in case of an internal error (default is java.lang.Error). Note that this exception is only for internal scanner errors. With usual specifications it should never occur (i.e. if there is an error fallback rule in the specification and only the documented scanner API is used).
%buffer "size"

Sets the initial size of the scan buffer to the specified value (decimal, in bytes). The default value is 16384.
%include "filename"

Replaces the %include line verbatim with the contents of the specified file. This feature is still experimental. It works, but error reporting can be strange if a syntax error occurs on the last token in the included file.
%function "name"

Causes the scanning method to get the specified name. If no %function directive is present in the specification, the scanning method gets the name ``yylex''. This directive overrides settings of the %cup switch. Please note that the default name of the scanning method with the %cup switch is next_token. Overriding this name might lead to the generated scanner being implicitly declared as abstract, because it does not provide the method next_token of the interface java_cup.runtime.Scanner. It is of course possible to provide a dummy implementation of that method in the class code section if you still want to override the function name.
%integer / %int

Both cause the scanning method to be declared as of Java type int. Actions in the specification can then return int values as tokens. The default end of file value under this setting is YYEOF, which is a public static final int member of the generated class.
%intwrap

Causes the scanning method to be declared as of the Java wrapper type Integer. Actions in the specification can then return Integer values as tokens. The default end of file value under this setting is null.
%type "typename"

Causes the scanning method to be declared as returning values of the specified type. Actions in the specification can then return values of typename as tokens. The default end of file value under this setting is null. If typename is not a subclass of java.lang.Object, you should specify another end of file value using the %eofval{ ... %eofval} directive or the <<EOF>> rule. The %type directive overrides settings of the %cup switch.
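A sketch combining %function and %type (the names nextToken and Token are invented for illustration; Token would be a user-defined class):

```jflex
%function nextToken
%type Token
```

The scanning method of the generated class is then declared roughly as public Token nextToken(), and actions return Token values.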
%yylexthrow{
%yylexthrow}
or (on a single line) just
%yylexthrow "exception1" [, "exception2", ...]
The exceptions listed inside %yylexthrow{ ... %yylexthrow} will be declared in the throws clause of the scanning method. If there is more than one %yylexthrow{ ... %yylexthrow} clause in the specification, all specified exceptions will be declared.
The default end of file value depends on the return type of the scanning method:

- with %integer, it is YYEOF,
- with %intwrap, it is null,
- with a user-defined %type "typename", it is null,
- in %cup compatibility mode, it is new java_cup.runtime.Symbol(sym.EOF).
User values and code to be executed at the end of file can be defined using these directives:
%eofval{
%eofval}
The code included in %eofval{ ... %eofval} will be copied verbatim into the scanning method and will be executed each time the end of file is reached (this is possible when the scanning method is called again after the end of file has been reached). The code should return the value that indicates the end of file to the parser. There should be only one %eofval{ ... %eofval} clause in the specification.
The %eofval{ ... %eofval} directive overrides settings of the %cup and %byaccj switches. As of version 1.2, JFlex provides a more readable way to specify the end of file value using the <<EOF>> rule (see also section 4.3.2).
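For the CUP example of section 3, the default end of file behaviour could be written explicitly either with the directive (a sketch):

```jflex
%eofval{
  return new java_cup.runtime.Symbol(sym.EOF);
%eofval}
```

or, more readably, as an <<EOF>> rule in the lexical rules section:

```jflex
<<EOF>>    { return new java_cup.runtime.Symbol(sym.EOF); }
```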
%eof{
%eof}
The code included in %eof{ ... %eof} will be executed exactly once, when the end of file is reached. The code is included inside a method void yy_do_eof() and should not return any value (use %eofval{...%eofval} or <<EOF>> for this purpose). If more than one end of file code directive is present, the code will be concatenated in order of appearance in the specification.
%eofthrow{
%eofthrow}
or (on a single line) just
%eofthrow "exception1" [, "exception2", ...]
The exceptions listed inside %eofthrow{...%eofthrow} will be declared in the throws clause of the method yy_do_eof() (see %eof for more on that method). If there is more than one %eofthrow{...%eofthrow} clause in the specification, all specified exceptions will be declared.
%eofclose

Causes JFlex to close the input stream at the end of file. The code yyclose() is appended to the method yy_do_eof() (together with the code specified in %eof{...%eof}), and the exception java.io.IOException is declared in the throws clause of this method (together with those of %eofthrow{...%eofthrow}).
%eofclose false

Turns the effect of %eofclose off again (e.g. in case closing of the input stream is not wanted after %cup).
%debug

Creates a main function in the generated class that expects the name of an input file on the command line and then runs the scanner on this input file, printing information about each returned token to the Java console until the end of file is reached. The information includes: line number (if line counting is enabled), column (if column counting is enabled), the matched text, and the executed action (with line number in the specification).
%standalone

Creates a main function in the generated class that expects the name of an input file on the command line and then runs the scanner on this input file. The values returned by the scanner are ignored, but any unmatched text is printed to the Java console instead (as the C/C++ tool flex does when run as a standalone program). To avoid having to use an extra token class, the scanning method will be declared as having default type int, not YYtoken (if there isn't any other type explicitly specified). This is in most cases irrelevant, but could be useful to know when making another scanner standalone for some purpose. You should also consider using the %debug directive if you just want to be able to run the scanner without a parser attached, for testing etc.
The %cup directive enables the CUP compatibility mode and is equivalent to the following set of directives:
%implements java_cup.runtime.Scanner
%function next_token
%type java_cup.runtime.Symbol
%eofval{
  return new java_cup.runtime.Symbol(<CUPSYM>.EOF);
%eofval}
%eofclose
The value of <CUPSYM> defaults to sym and can be changed with the %cupsym directive. In JLex compatibility mode (-jlex switch on the command line), %eofclose will not be turned on.
%cupsym "classname"

Customises the name of the CUP-generated class/interface containing the names of terminal tokens. Default is sym. The directive should not be used after %cup, but before it.
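For example (ParserSym is an invented name here):

```jflex
%cupsym ParserSym
%cup
```

With this ordering, the %cup-generated end of file code returns new java_cup.runtime.Symbol(ParserSym.EOF) instead of using the default class name sym.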
%cupdebug

Creates a main function in the generated class that expects the name of an input file on the command line and then runs the scanner on this input file. Prints line, column, matched text, and CUP symbol name for each returned token to standard out.
The %byaccj directive enables the BYacc/J compatibility mode and is equivalent to the following set of directives:
%integer
%eofval{
  return 0;
%eofval}
%eofclose
The %switch code generation method is deprecated and will be removed in JFlex 1.6. With %switch, JFlex will generate a scanner that has the DFA hard coded into a nested switch statement. This method gives a good deal of compression in terms of the size of the compiled .class file while still providing very good performance. If your scanner gets too big though (say more than about 200 states), performance may degrade drastically and you should consider using one of the %table or %pack directives. If your scanner gets even bigger (about 300 states), the Java compiler javac could produce corrupted code that will crash when executed or will give you a java.lang.VerifyError when checked by the virtual machine. This is due to the 64 KB size limitation on Java methods as described in the Java Virtual Machine Specification [10]. In this case you will be forced to use the %pack directive, since %switch usually provides more compression of the DFA table than the %table directive.
The %table code generation method is deprecated and will be removed in JFlex 1.6. The %table directive causes JFlex to produce a classical table-driven scanner that encodes its DFA table in an array. In this mode, JFlex only does a small amount of table compression (see [6], [12], [1] and [13] for more details on the matter of table compression) and uses the same method that JLex did up to version 1.2.1. See section 7 performance of this manual to compare these methods. The same reason as above (64 KB size limitation of methods) causes the same problem when the scanner gets too big. This is because the virtual machine treats static initialisers of arrays as normal methods. You will in this case again be forced to use the %pack directive to avoid the problem.
%pack causes JFlex to compress the generated DFA table and to store it in one or more string literals. JFlex takes care that the strings are not longer than permitted by the class file format. The strings have to be unpacked when the first scanner object is created and initialised. After unpacking, the DFA table is exactly the same as with option %table -- the only extra work to be done at runtime is the unpacking process, which is fast (not noticeable in normal cases). It is in time complexity proportional to the size of the expanded DFA table, and it is static, i.e. it is done only once per scanner class -- no matter how often it is instantiated. Again, see section 7 performance on the performance of these scanners. With %pack, there should be practically no limitation to the size of the scanner. %pack is the default setting and will be used when no code generation method is specified.
%7bit

Causes the generated scanner to use a 7-bit input character set (character codes 0-127). If an input character with a code greater than 127 is encountered in an input at runtime, the scanner will throw an ArrayIndexOutOfBoundsException. Not only because of this, you should consider using the %unicode directive. See also section 5 for information about character encodings. This is the default in JLex compatibility mode.
%8bit / %full

Both options cause the generated scanner to use an 8-bit input character set (character codes 0-255). If an input character with a code greater than 255 is encountered in an input at runtime, the scanner will throw an ArrayIndexOutOfBoundsException. Note that even if your platform uses only one byte per character, the Unicode value of a character may still be greater than 255. If you are scanning text files, you should consider using the %unicode directive. See also section 5 for more information about character encodings.
%16bit / %unicode

Both options cause the generated scanner to use the BMP (Basic Multilingual Plane) of the Unicode input character set that Java supports natively (character code points 0-65535). JFlex 1.5.0 does not yet support supplementary characters above the BMP; support is planned for JFlex 1.6. There will be no runtime overflow when using this set of input characters. %unicode does not mean that the scanner will read two bytes at a time. What is read and what constitutes a character depends on the runtime platform. See also section 5 for more information about character encodings. This is the default unless the JLex compatibility mode is used (command line option -jlex).
%caseless / %ignorecase

This option causes JFlex to handle all characters and strings in the specification as if they were specified in both uppercase and lowercase form. This enables an easy way to specify a scanner for a language with case-insensitive keywords. The string "break" in a specification is for instance handled like the expression ([bB][rR][eE][aA][kK]). The %caseless option does not change the matched text and does not affect character classes. So [a] still only matches the character a and not A, too. Which letters are uppercase and which lowercase is defined by the Unicode standard. In JLex compatibility mode (-jlex switch on the command line), %caseless and %ignorecase also affect character classes.
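The expansion described above can be illustrated in plain Java (an illustration only, not part of JFlex): a case-sensitive match against the expanded character-class form accepts exactly the case variants of the keyword.

```java
import java.util.regex.Pattern;

public class CaselessDemo {
    // The expression that %caseless conceptually produces for "break"
    private static final Pattern EXPANDED =
        Pattern.compile("[bB][rR][eE][aA][kK]");

    public static boolean matchesKeyword(String input) {
        return EXPANDED.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(matchesKeyword("BrEaK"));  // prints "true"
        System.out.println(matchesKeyword("breaks")); // prints "false"
    }
}
```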
%char

Turns character counting on. The int member variable yychar contains the number of characters (starting with 0) from the beginning of input to the beginning of the current token.
%line

Turns line counting on. The int member variable yyline contains the number of lines (starting with 0) from the beginning of input to the beginning of the current token.
%column

Turns column counting on. The int member variable yycolumn contains the number of characters (starting with 0) from the beginning of the current line to the beginning of the current token.
%notunix

This JLex option is obsolete in JFlex but still recognised as a valid directive. It used to switch between Windows and Unix kinds of line terminators (\r\n and \n) for the $ operator in regular expressions. JFlex always recognises both styles of platform-dependent line terminators.
%yyeof

This JLex option is obsolete in JFlex but still recognised as a valid directive. In JLex it declares a public member constant YYEOF. JFlex declares it in any case.
%s[tate] "state identifier" [, "state identifier", ... ] for inclusive states, or
%x[state] "state identifier" [, "state identifier", ... ] for exclusive states
There may be more than one line of state declarations, each starting with %state or %xstate (the first character is sufficient: %s and %x work, too). State identifiers are letters followed by a sequence of letters, digits or underscores. State identifiers can be separated by white-space or commas.
The sequence
%state STATE1
%xstate STATE3, XYZ, STATE_10
%state ABC STATE5
declares the set of identifiers STATE1, STATE3, XYZ, STATE_10, ABC, STATE5 as lexical states, STATE1, ABC, STATE5 as inclusive, and STATE3, XYZ, STATE_10 as exclusive. See also section 4.3.3 on the way lexical states influence how the input is matched.
macroidentifier = regular expression
That means, a macro definition is a macro identifier (letter followed by a sequence of letters, digits or underscores), that can later be used to reference the macro, followed by optional white-space, followed by an "=", followed by optional white-space, followed by a regular expression (see section 4.3 lexical rules for more information about regular expressions).
The regular expression on the right hand side must be well formed and must not contain the ^, / or $ operators. Unlike in JLex, macros are not just pieces of text that are expanded by copying - they are parsed and must be well formed.
This is a feature. It eliminates some very hard to find bugs in lexical specifications (such as not having parentheses around more complicated macros - which is not necessary with JFlex). See section 8.1 Porting from JLex for more details on the problems of JLex-style macros.
Since it is allowed to have macro usages in macro definitions, it is possible to use a grammar like notation to specify the desired lexical structure. Macros however remain just abbreviations of the regular expressions they represent. They are not non terminals of a grammar and cannot be used recursively in any way. JFlex detects cycles in macro definitions and reports them at generation time. JFlex also warns you about macros that have been defined but never used in the ``lexical rules'' section of the specification.
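A small grammar-like macro chain might look as follows (a sketch; the macro names are invented):

```jflex
Digit   = [0-9]
Sign    = [+-]
Integer = {Sign}? {Digit}+
Float   = {Integer} "." {Digit}+
```

Each usage is expanded in place: {Float} stands for the full regular expression it abbreviates. A self-referring definition such as Expr = "(" {Expr} ")" would be reported as a cycle at generation time.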
The %include directive may be used in this section to include lexical rules from a separate file. The directive will be replaced verbatim by the contents of the specified file.
LexicalRules ::= (Include|Rule)+
Include      ::= '%include' (' '|'\t'|'\b')+ File
Rule         ::= [StateList] ['^'] RegExp [LookAhead] Action
               | [StateList] '<<EOF>>' Action
               | StateGroup
StateGroup   ::= StateList '{' Rule+ '}'
StateList    ::= '<' Identifier (',' Identifier)* '>'
LookAhead    ::= '$' | '/' RegExp
Action       ::= '{' JavaCode '}' | '|'

RegExp       ::= RegExp '|' RegExp
               | RegExp RegExp
               | '(' RegExp ')'
               | ('!'|'~') RegExp
               | RegExp ('*'|'+'|'?')
               | RegExp "{" Number ["," Number] "}"
               | CharClass
               | PredefinedClass
               | MacroUsage
               | '"' StringCharacter+ '"'
               | Character

CharClass    ::= '[' ['^'] CharClassContent* ']'
               | '[' ['^'] CharClassContent+
                     CharClassOperator CharClassContent+ ']'

CharClassContent  ::= CharClass | Character | Character'-'Character
                    | MacroUsage | PredefinedClass

CharClassOperator ::= '||' | '&&' | '--' | '~~'

MacroUsage   ::= '{' Identifier '}'

PredefinedClass ::= '[:jletter:]'
                  | '[:jletterdigit:]'
                  | '[:letter:]'
                  | '[:digit:]'
                  | '[:uppercase:]'
                  | '[:lowercase:]'
                  | '\d' | '\D'
                  | '\s' | '\S'
                  | '\w' | '\W'
                  | '\p{' UnicodePropertySpec '}'
                  | '\P{' UnicodePropertySpec '}'
                  | '\R'
                  | '.'

UnicodePropertySpec ::= BinaryProperty
                      | EnumeratedProperty (':' | '=') PropertyValue

BinaryProperty      ::= Identifier
EnumeratedProperty  ::= Identifier
PropertyValue       ::= Identifier
The grammar uses the following terminal symbols:

An Identifier is a letter [a-zA-Z] followed by a sequence of zero or more letters, digits or underscores [a-zA-Z0-9_].

A Character is any Unicode character except the meta characters | ( ) { } [ ] < > \ . * + ? ^ $ / . " ~ ! . A meta character, and any other character, can also be written as an escape sequence: \" , \n , \r , \t , \f , \b , a \x followed by two hexadecimal digits [a-fA-F0-9] (denoting a standard ASCII escape sequence), or a \u followed by four hexadecimal digits [a-fA-F0-9] (denoting a Unicode escape sequence).
Please note that the \n escape sequence stands for the ASCII LF character - not for the end of line. If you would like to match the line terminator, you should use the expression \r|\n|\r\n if you want the Java conventions, or \r\n|[\r\n\u2028\u2029\u000B\u000C\u0085] (provided as the predefined class \R) if you want to be fully Unicode compliant (see also [5]).
As of version 1.1 of JFlex, the white-space characters " " (space) and "\t" (tab) can be used to improve the readability of regular expressions. They will be ignored by JFlex. In character classes and strings, however, white-space characters keep standing for themselves (so the string " " still matches exactly one space character and [ \n] still matches an ASCII LF or a space character).
JFlex applies the following standard operator precedences in regular
expressions (from highest to lowest):

  unary postfix operators ('*', '+', '?', {n}, {n,m})
  unary prefix operators ('!', '~')
  concatenation (RegExp ::= RegExp RegExp)
  union (RegExp ::= RegExp '|' RegExp)
So the expression a | abc | !cd*
for instance is parsed as
(a|(abc)) | ((!c)(d*))
.
A regular expression that consists solely of a character class '[...]'
matches any character in that class. A character is considered an element
of a class if it is listed explicitly in the class, if its code lies within
a listed character range Character'-'Character, or if it is an element of a
macro or predefined character class listed in the class. So [a0-3\n]
for instance matches the characters
a 0 1 2 3 \n
If the list of characters is empty (i.e. just []), the expression
matches nothing at all (the empty set), not even the empty string. This
may be useful in combination with the negation operator '!'.
Character sets may be nested, e.g. [[[abc]d[e]]fg]
is equivalent
to [abcdefg]
.
Supported character set operations:

Union (||), e.g. [[a-c]||[d-f]], equivalent to [a-cd-f]: this is the
default character set operation when no operator is specified.

Intersection (&&), e.g. [[a-f]&&[f-m]], equivalent to [f].

Set difference (--), e.g. [[a-z]--m], equivalent to [a-ln-z].

Symmetric difference (~~): the union of two classes minus their
intersection. For instance [\p{Letter}~~\p{ASCII}] is equivalent to
[[\p{Letter}||\p{ASCII}]--[\p{Letter}&&\p{ASCII}]]: the set of characters
that are present in either \p{Letter} or in \p{ASCII}, but not in both.
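As an illustration, set difference can be used to define a class such as all lowercase consonants (the macro name Consonant is made up for this example):

```
Consonant = [[a-z]--[aeiou]]
```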
A negated character class '[^...]' matches all characters not listed in
the class. If the list of characters is empty (i.e. [^]), the expression
matches any character of the input character set.

A string matches the exact text enclosed in double quotes. All meta
characters but \ and " lose their special meaning inside a string. See
also the %ignorecase switch.
'{' Identifier '}'
matches the input that is matched
by the right hand side of the macro with name "Identifier".
[:jletter:]       isJavaIdentifierStart()
[:jletterdigit:]  isJavaIdentifierPart()
[:letter:]        \p{Letter}
[:digit:]         \p{Digit}
[:uppercase:]     \p{Uppercase}
[:lowercase:]     \p{Lowercase}
\d                \p{Digit}
\D                \P{Digit}
\s                \p{Whitespace}
\S                \P{Whitespace}
\w                [\p{Alpha}\p{Digit}\p{Mark}\p{Connector Punctuation}\p{Join Control}]
\W                [^\p{Alpha}\p{Digit}\p{Mark}\p{Connector Punctuation}\p{Join Control}]
To refer to a Unicode Property, use the \p{...} syntax, e.g. the Greek
Block can be referred to as \p{Block:Greek}. To match all characters not
included in a property, use the \P{...} syntax (note that the 'P' is
uppercase), e.g. to match all characters that are not letters: \P{Letter}.
See UTS#18 [5] for a description of and links to definitions of some supported Properties. UnicodeSet [14] is an online utility to show the character sets corresponding to Unicode Properties and set operations on them, but only for the most recent Unicode version.
The dot '.' matches any character except a line terminator; it is
equivalent to [^\r\n\u2028\u2029\u000B\u000C\u0085]. To get the JLex
behaviour of matching any character except \n, use [^\n] explicitly.

\R matches any newline: \r\n|[\r\n\u2028\u2029\u000B\u000C\u0085].
If a and b are regular expressions, then

a | b   (union) is the regular expression that matches all input matched
        by a or by b.

a b     (concatenation) is the regular expression that matches the input
        matched by a followed by the input matched by b.

a*      (Kleene closure) matches zero or more repetitions of the input
        matched by a.

a+      is equivalent to aa*.

a?      matches the empty input or the input matched by a.

!a      (negation) matches everything but the strings matched by a.
        Use with care: the construction of !a involves an additional,
        possibly exponential NFA to DFA transformation on the NFA for a.
        Note that with negation and union you also have (by applying
        DeMorgan) intersection and set difference: the intersection of
        a and b is !(!a|!b), and the expression that matches everything
        of a not matched by b is !(!a|b).

~a      (upto) matches everything up to (and including) the first
        occurrence of a text matched by a. The expression ~a is equivalent
        to !([^]* a [^]*) a. A traditional C-style comment is matched by
        "/*" ~"*/".

a{n}    is equivalent to n times the concatenation of a. So a{4} for
        instance is equivalent to the expression a a a a. The decimal
        integer n must be positive.

a{n,m}  is equivalent to at least n and at most m repetitions of a. So
        a{2,4} for instance is equivalent to the expression a a a? a?.
        Both n and m are non-negative decimal integers and m must not be
        smaller than n.
In a lexical rule, a regular expression r may be preceded by a '^' (the
beginning of line operator). r is then only matched at the beginning of a
line in the input. A line begins after each occurrence of
\r|\n|\r\n|\u2028|\u2029|\u000B|\u000C|\u0085 (see also [5]) and at the
beginning of input. The preceding line terminator in the input is not
consumed and can be matched by another rule.
In a lexical rule, a regular expression r may be followed by a
look-ahead expression. A look-ahead expression is either a '$'
(the end of line operator) or a '/'
followed by an arbitrary
regular expression. In both cases the look-ahead is not consumed and
not included in the matched text region, but it is considered
while determining which rule has the longest match (see also
4.3.3 How the input is matched).
In the '$' case r is only matched at the end of a line in
the input. The end of a line is denoted by the regular expression
\r|\n|\r\n|\u2028|\u2029|\u000B|\u000C|\u0085
.
So a$
is equivalent to a / \r|\n|\r\n|\u2028|\u2029|\u000B|\u000C|\u0085
.
This is different from the situation described in [5]: since in JFlex $
is a true trailing context, the end of file does not count as end of line.
For arbitrary look-ahead (also called trailing context) the expression is matched only when followed by input that matches the trailing context.
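As a sketch, a hypothetical rule that recognises an identifier only when it is followed by an opening parenthesis (without consuming the parenthesis) could look like this; the helper symbol() and the constant FUNCTION_NAME are made up for this example:

```
Identifier = [a-zA-Z][a-zA-Z0-9]*
%%
{Identifier} / "("   { return symbol(FUNCTION_NAME); }
```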
As of version 1.2, JFlex allows lex/flex style <<EOF>> rules in lexical
specifications. A rule

[StateList] <<EOF>> { some action code }

is very similar to the %eofval directive (section 4.2.3). The difference
lies in the optional StateList that may precede the <<EOF>> rule. The
action code will only be executed when the end of file is read and the
scanner is currently in one of the lexical states listed in StateList. The
same StateGroup (see section 4.3.3 How the input is matched) and precedence
rules as in the ``normal'' rule case apply (i.e. if there is more than one
<<EOF>> rule for a certain lexical state, the action of the one appearing
earlier in the specification will be executed). <<EOF>> rules override
settings of the %cup and %byaccj options and should not be mixed with the
%eofval directive.
An Action consists either of a piece of Java code enclosed in
curly braces or is the special |
action. The |
action is
an abbreviation for the action of the following expression.
Example:
expression1 | expression2 | expression3 { some action }

is equivalent to the expanded form

expression1 { some action }
expression2 { some action }
expression3 { some action }
They are useful when you work with trailing context expressions. The
expression a | (c / d) | b is not syntactically legal, but can
easily be expressed using the |
action:
a     |
c / d |
b     { some action }
Lexical states can be used to further restrict the set of regular expressions that match the current input.
Example:
%states A, B
%xstates C
%%
expr1                 { yybegin(A); action }
<YYINITIAL, A> expr2  { action }
<A> {
  expr3               { action }
  <B,C> expr4         { action }
}

The first line declares two (inclusive) lexical states A and B, the second line an exclusive lexical state C. The default (inclusive) state YYINITIAL is always implicitly there and doesn't need to be declared. The rule with expr1 has no states listed, and is thus matched in all states but the exclusive ones, i.e. A, B, and YYINITIAL. In its action, the scanner is switched to state A. The second rule expr2 can only match when the scanner is in state YYINITIAL or A. The rule expr3 can only be matched in state A and expr4 in states A, B, and C.
The generated class contains (among other things) the DFA tables, an input buffer, the lexical states of the specification, a constructor, and the scanning method with the user supplied actions.
The name of the class is by default Yylex, it is customisable with the %class directive (see also section 4.2.1). The input buffer of the lexer is connected with an input stream over the java.io.Reader object which is passed to the lexer in the generated constructor. If you want to provide your own constructor for the lexer, you should always call the generated one in it to initialise the input buffer. The input buffer should not be accessed directly, but only over the advertised API (see also section 4.3.5). Its internal implementation may change between releases or skeleton files without notice.
The main interface to the outside world is the generated scanning method (default name yylex, default return type Yytoken). Most of its aspects are customisable (name, return type, declared exceptions etc., see also section 4.2.2). If it is called, it will consume input until one of the expressions in the specification is matched or an error occurs. If an expression is matched, the corresponding action is executed. It may return a value of the specified return type (in which case the scanning method returns with this value), or if it doesn't return a value, the scanner resumes consuming input until the next expression is matched. If the end of file is reached, the scanner executes the EOF action, and (also upon each further call to the scanning method) returns the specified EOF value (see also section 4.2.3).
Currently, the API consists of the following methods and member fields:
A typical example for this are include files in style of the C pre-processor. The corresponding JFlex specification could look somewhat like this:
"#include" {FILE}  { yypushStream(new FileReader(getFile(yytext()))); }
..
<<EOF>>            { if (yymoreStreams()) yypopStream();
                     else return EOF; }
This method is only available in the skeleton file skeleton.nested. You can find it in the src directory of the JFlex distribution.
String matched = yytext();
yypushback(1);
return matched;

will return the whole matched text, while

yypushback(1);
return yytext();

will return the matched text minus the last character.
This section tries to shed some light on the issues of Unicode and encodings, cross platform scanning, and how to deal with binary data. My thanks go to Stephen Ostermiller for his input on this topic.
Before we dive straight into details, let's take a look at what the problem is. The problem is Java's platform independence when you want to use it. For scanners the interesting part about platform independence is character encodings and how they are handled.
If a program reads a file from disk, it gets a stream of bytes. In earlier times, when the grass was green and the world was much simpler, everybody knew that the byte value 65 is, of course, an A. It was no problem to see which bytes meant which characters (actually these times never existed, but anyway). The normal Latin alphabet only has 26 characters, so 7 bits or 128 distinct values should surely be enough to map them, even if you allow yourself the luxury of upper and lower case. Nowadays, things are different. The world suddenly grew much larger, and all kinds of people wanted all kinds of special characters, simply because they use them in their language and writing. This is where the mess starts. Since the 128 distinct values were already filled up with other stuff, people began to use all 8 bits of the byte and extended the byte/character mappings to fit their needs, and of course everybody did it differently. Some people for instance may have said ``let's use the value 213 for the German character ä''. Others may have found that 213 should much rather mean é, because they didn't need German and wrote French instead. As long as you use your programs and data files only on one platform, this is no problem, since everybody agrees on what means what, and everything gets used consistently.
Now Java comes into play, and wants to run everywhere (once written, that is) and now there suddenly is a problem: how do I get the same program to say ä to a certain byte when it runs in Germany and maybe é when it runs in France? And also the other way around: when I want to say é on the screen, which byte value should I send to the operating system?
Java's solution to this is to use Unicode internally. Unicode aims to be a superset of all known character sets and is therefore a perfect base for encoding things that might get used all over the world. To make things work correctly, you still have to know where you are and how to map byte values to Unicode characters and vice versa, but the important thing is, that this mapping is at least possible (you can map Kanji characters to Unicode, but you cannot map them to ASCII or iso-latin-1).
Scanning text files is the standard application for scanners like JFlex. Therefore it should also be the most convenient one. Most times it is.
The following scenario works like a breeze: You work on a platform X, write your lexer specification there, can use any obscure Unicode character in it as you like, and compile the program. Your users work on any platform Y (possibly but not necessarily something different from X), they write their input files on Y and they run your program on Y. No problems.
Java does this as follows: If you want to read anything in Java that is supposed to contain text, you use a FileReader or some InputStream together with an InputStreamReader. InputStreams return the raw bytes, the InputStreamReader converts the bytes into Unicode characters with the platform's default encoding. If a text file is produced on the same platform, the platform's default encoding should do the mapping correctly. Since JFlex also uses readers and Unicode internally, this mechanism also works for the scanner specifications. If you write an A in your text editor and the editor uses the platform's encoding (say A is 65), then Java translates this into the logical Unicode A internally. If a user writes an A on a completely different platform (say A is 237 there), then Java also translates this into the logical Unicode A internally. Scanning is performed after that translation and both match.
Note that because of this mapping from bytes to characters, you should always
use the %unicode switch in your lexer specification if you want to scan
text files. %8bit may not be enough, even if
you know that your platform only uses one byte per character. The encoding
Cp1252 used on many Windows machines for instance knows 256 characters, but
the character ' with Cp1252 code \x92 has the Unicode value \u2019, which
is larger than 255 and which would make your scanner throw an
ArrayIndexOutOfBoundsException if it is encountered.
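This mapping can be observed directly in Java. The following sketch (the class and method names are made up for this example) decodes a single Cp1252 byte and returns the Unicode character it stands for; byte \x92 indeed comes out as \u2019:

```java
import java.nio.charset.Charset;

public class Cp1252Demo {
    /* Decode a single Cp1252 byte into the Unicode character it denotes. */
    static char decode(int byteValue) {
        byte[] b = { (byte) byteValue };
        return new String(b, Charset.forName("windows-1252")).charAt(0);
    }

    public static void main(String[] args) {
        // Cp1252 byte 0x92 is the right single quotation mark, U+2019
        System.out.println((int) decode(0x92)); // prints 8217 (= 0x2019)
    }
}
```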
So for the usual case you don't have to do anything but use the %unicode switch in your lexer specification.
Things may break when you produce a text file on platform X and
consume it on a different platform Y. Let's say you have a file
written on a Windows PC using the encoding Cp1252. Then you move
this file to a Linux PC with encoding ISO 8859-1 and there you want
to run your scanner on it. Java now thinks the file is encoded
in ISO 8859-1 (the platform's default encoding) while it really is
encoded in Cp1252. For most characters
Cp1252 and ISO 8859-1 are the same, but for the byte values \x80
to \x9f
they disagree: ISO 8859-1 is undefined there. You can fix
the problem by telling Java explicitly which encoding to use: when
constructing the InputStreamReader, you can give the encoding as an
argument.
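For instance, a reader that always decodes its input as Cp1252 regardless of the platform default could be constructed as sketched below (the class and helper names are made up for this example):

```java
import java.io.*;

public class EncodingDemo {
    /* Read an entire stream, decoding bytes as Cp1252 rather than
       with the platform's default encoding. */
    static String readAll(InputStream in) {
        try (Reader r = new InputStreamReader(in, "Cp1252")) {
            StringBuilder sb = new StringBuilder();
            int c;
            while ((c = r.read()) != -1) sb.append((char) c);
            return sb.toString();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // byte 0x92 is the right single quotation mark in Cp1252
        byte[] data = { 'a', (byte) 0x92, 'b' };
        System.out.println(readAll(new ByteArrayInputStream(data)).length()); // prints 3
    }
}
```

A scanner generated by JFlex could then be handed such a reader in its constructor instead of a plain FileReader.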
Of course the encoding to use can also come from the data itself: for instance, when you scan an HTML page, it may have embedded information about its character encoding in the headers.
More information about encodings, which ones are supported, how they are called, and how to set them may be found in the official Java documentation in the chapter about internationalisation. The link http://docs.oracle.com/javase/1.5.0/docs/guide/intl/ leads to an online version of this for Oracle's JDK 1.5.
Scanning binaries is both easier and more difficult than scanning text files. It's easier because you want the raw bytes and not their meaning, i.e. you don't want any translation. It's more difficult because it's not so easy to get ``no translation'' when you use Java readers.
The problem (for binaries) is that JFlex scanners are
designed to work on text. Therefore the interface is
the Reader class (there is a constructor
for InputStream instances, but it's just there
for convenience and wraps an InputStreamReader
around it to get characters, not bytes).
You can still get a binary scanner when you write
your own custom InputStreamReader class that
does explicitly no translation, but just copies
byte values to character codes instead. It sounds
quite easy, and actually it is no big deal, but there
are a few little pitfalls on the way. In the scanner
specification you can only enter positive character
codes (for bytes that is \x00
to \xFF
). Java's byte type on the other hand
is a signed 8 bit integer (-128 to 127), so you have to convert
them properly in your custom Reader. Also, you should
take care when you write your lexer spec: if you
use text in there, it gets interpreted by an encoding
first, and what scanner you get as result might depend
on which platform you run JFlex on when you generate
the scanner (this is what you want for text, but for binaries it
gets in the way). If you are not sure, or if the development
platform might change, it's probably best to use character
code escapes in all places, since they don't change their
meaning.
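A minimal sketch of such a ``no translation'' reader could look like the following (the class name RawByteReader and the helper decodeAll are made up for this example); it copies byte values 0x00 to 0xFF straight to character codes, masking off Java's sign extension:

```java
import java.io.*;

/* Maps every input byte 1:1 to the character with the same
   (unsigned) value 0x00..0xFF, performing no charset translation. */
public class RawByteReader extends Reader {
    private final InputStream in;

    public RawByteReader(InputStream in) { this.in = in; }

    @Override
    public int read(char[] cbuf, int off, int len) throws IOException {
        byte[] buf = new byte[len];
        int n = in.read(buf, 0, len);
        if (n < 0) return -1;                        // end of stream
        for (int i = 0; i < n; i++)
            cbuf[off + i] = (char) (buf[i] & 0xFF);  // mask off sign extension
        return n;
    }

    @Override
    public void close() throws IOException { in.close(); }

    /* Helper for demonstration: decode a whole byte array. */
    static String decodeAll(byte[] bytes) {
        try (Reader r = new RawByteReader(new ByteArrayInputStream(bytes))) {
            StringBuilder sb = new StringBuilder();
            int c;
            while ((c = r.read()) != -1) sb.append((char) c);
            return sb.toString();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // byte 0xFF (Java byte value -1) becomes character \u00FF
        System.out.println((int) decodeAll(new byte[]{ (byte) 0xFF }).charAt(0)); // prints 255
    }
}
```

Without the & 0xFF mask, negative byte values would be sign-extended to characters like \uFFFF, which is exactly the pitfall described above.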
This section gives details about JFlex 1.5.0's conformance with the requirements for Basic Unicode Support Level 1 given in UTS#18 [5].
To meet this requirement, an implementation shall supply a mechanism for specifying any Unicode code point (from U+0000 to U+10FFFF), using the hexadecimal code point representation.
JFlex does not fully conform: although syntax is provided to express
values across the whole range, via \uXXXX
, where XXXX
is
a 4-digit hex value, and \Uyyyyyy
, where yyyyyy
is a
6-digit hex value, JFlex only supports characters within the 16-bit
Basic Multilingual Plane, so the \Uyyyyyy
syntax is not usable.
To meet this requirement, an implementation shall provide at least a minimal list of properties, consisting of the following: General_Category, Script and Script_Extensions, Alphabetic, Uppercase, Lowercase, White_Space, Noncharacter_Code_Point, Default_Ignorable_Code_Point, ANY, ASCII, ASSIGNED.
The values for these properties must follow the Unicode definitions, and include the property and property value aliases from the UCD. Matching of Binary, Enumerated, Catalog, and Name values, must follow the Matching Rules from [UAX44].
JFlex conforms. The minimal set of properties is supported, as well as
a few others. To see the full list of supported properties, use the JFlex
command line option --uniprops <ver>
, where <ver>
is the
Unicode version. Loose matching is performed: case distinctions,
whitespace, underscores and hyphens in property names and values are
ignored.
To meet this requirement, an implementation shall provide the properties listed in Annex C: Compatibility Properties, with the property values as listed there. Such an implementation shall document whether it is using the Standard Recommendation or POSIX-compatible properties.
JFlex does not fully conform. The Standard Recommendation version of the
Annex C Compatibility Properties are provided, with two exceptions:
\X
Extended Grapheme Clusters; and \b
Default Word
Boundaries.
To meet this requirement, an implementation shall supply mechanisms for union, intersection and set-difference of Unicode sets.
JFlex conforms by providing these mechanisms, as well as symmetric difference.
To meet this requirement, an implementation shall extend the word boundary mechanism so that:
JFlex does not conform: \b
does not match simple word boundaries.
To meet this requirement, if an implementation provides for case-insensitive matching, then it shall provide at least the simple, default Unicode case-insensitive matching, and specify which properties are closed and which are not.
To meet this requirement, if an implementation provides for case conversions, then it shall provide at least the simple, default Unicode case folding.
JFlex conforms. All supported Unicode Properties are closed.
To meet this requirement, if an implementation provides for line-boundary testing, it shall recognize not only CRLF, LF, CR, but also NEL (U+0085), PARAGRAPH SEPARATOR (U+2029) and LINE SEPARATOR (U+2028).
JFlex conforms.
To meet this requirement, an implementation shall handle the full range of Unicode code points, including values from U+FFFF to U+10FFFF. In particular, where UTF-16 is used, a sequence consisting of a leading surrogate followed by a trailing surrogate shall be handled as a single code point in matching.
JFlex does not conform. Only code points in the Basic Multilingual Plane (BMP) are supported. Conformance to RL1.7 is planned for JFlex 1.6.
This section gives some tips on how to make your specification produce a faster scanner.
Although JFlex generated scanners show good performance without special optimisations, there are some heuristics that can make a lexical specification produce an even faster scanner. Those are (roughly in order of performance gain):
From the C/C++ flex [11] man page: ``Getting rid of backtracking is messy and often may be an enormous amount of work for a complicated scanner.'' Backtracking is introduced by the longest match rule and occurs for instance on this set of expressions:
"averylongkeyword"
.
With input "averylongjoke" the scanner has to read all characters up to 'j' to decide that rule . should be matched. All characters of "verylong" have to be read again for the next matching process. Backtracking can be avoided in general by adding error rules that match those error conditions
"av"|"ave"|"avery"|"averyl"|..
While this is impractical in most scanners, there is still the possibility to add a ``catch all'' rule for a lengthy list of keywords
"keyword1"  { return symbol(KEYWORD1); }
..
"keywordn"  { return symbol(KEYWORDn); }
[a-z]+      { error("not a keyword"); }

Most programming language scanners already have a rule like this for some kind of variable length identifiers.
Avoid line and column counting (%line, %column): it costs multiple additional comparisons per input character and the matched text has to be re-scanned for counting. In most scanners it is possible to do the line counting in the specification by incrementing yyline each time a line terminator has been matched. Column counting could also be included in actions. This will be faster, but can in some cases become quite messy.
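A sketch of such manual counting in a specification (the field name line is made up for this example):

```
%{
  private int line = 1;   // counted in the action instead of using %line
%}
LineTerminator = \r\n|[\r\n\u2028\u2029\u000B\u000C\u0085]
%%
{LineTerminator}   { line++; }
```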
Avoid look-ahead expressions (trailing context): in the best case, the
trailing context will first have to be read and
then (because it is not to be consumed) re-read again. The cases of
fixed-length look-ahead and fixed-length base expressions are handled efficiently
by matching the concatenation and then pushing back the required amount
of characters. This extends to the case of a disjunction of fixed-length
look-ahead expressions such as r1 / \r|\n|\r\n
. All other cases
r1 / r2
are handled by first scanning the concatenation of
r1
and r2
, and then finding the correct end of r1
.
The end of r1
is found by scanning forwards in the match again,
marking all possible r1
terminations, and then scanning the reverse
of r2
backwards from the end until a start of r2
intersects
with an end of r1
. This algorithm is linear in the size of the input
(not quadratic or worse as backtracking is), but about a factor of 2 slower
than normal scanning. It also consumes memory proportional to the size
of the matched input for r1 r2
.
Avoid the beginning of line operator '^': it costs multiple additional
comparisons per match. In some cases one extra look-ahead character is
needed (when the last character read is \r, the scanner has to read one
character ahead to check if the next one is an \n or not).
Match as much text as possible in a single rule: one rule is matched in the innermost loop of the scanner, and after each action some overhead for setting up the internal state of the scanner is necessary.
Note that writing more rules in a specification does not make the generated scanner slower (except when you have to switch to another code generation method because of the larger size).
The two main rules of optimisation apply also for lexical specifications: first, don't do it; second (for experts only), don't do it yet.
Some of the performance tips above contradict a readable and compact specification style. When in doubt or when requirements are not or not yet fixed: don't use them -- the specification can always be optimised in a later state of the development process.
This works as expected on all well formed JLex specifications.
Since the statement above is somewhat absolute, let's take a look at what ``well formed'' means here. A JLex specification is well formed, when it
does not use the characters ! and ~ unescaped in regular expressions:
they are operators in JFlex, while JLex treats them as normal input
characters. You can easily port such a JLex specification to JFlex by
replacing every ! with \! and every ~ with \~ in all regular expressions.
uses macros only as abbreviations for complete, well-formed regular expressions. This may sound a bit harsh, but could otherwise be a major problem - it can also help you find some disgusting bugs in your specification that didn't show up in the first place. In JLex, the right hand side of a macro is just a piece of text that is copied to the point where the macro is used. With this, some weird kind of stuff like
macro1 = ("hello"
macro2 = {macro1})*

was possible (with macro2 expanding to
("hello")*
). This
is not allowed in JFlex and you will have to transform such
definitions. There are however some more subtle kinds of errors that
can be introduced by JLex macros. Let's consider a definition like
macro = a|b
and a usage like {macro}*
.
This expands in JLex to a|b*
and not to the probably intended
(a|b)*
.
JFlex always uses the second form of expansion, since it is the natural way of thinking about abbreviations for regular expressions.
Most specifications shouldn't suffer from this problem, because macros often only contain (harmless) character classes like alpha = [a-zA-Z] and more dangerous definitions like
ident = {alpha}({alpha}|{digit})*
are only used to write rules like
{ident} { .. action .. }
and not more complex expressions like
{ident}* { .. action .. }
where the kind of error presented above would show up.
Most of the C/C++ specific features are naturally not present in JFlex, but most ``clean'' lex/flex lexical specifications can be ported to JFlex without very much work.
This section is by far not complete and is based mainly on a survey of the flex man page and very little personal experience. If you do engage in any porting activity from lex/flex to JFlex and encounter problems, have better solutions for points presented here or have just some tips you would like to share, please do contact me. I will incorporate your experiences in this manual (with all due credit to you, of course).
definitions
%%
rules
%%
user code
The user code section usually contains some C code that is used
in actions of the rules part of the specification. For JFlex most
of this code will have to be included in the class code %{..%}
directive in the options and declarations section (after
translating the C code to Java, of course).
Macro definitions in flex have the form:
<identifier>  <expression>

To port them to JFlex macros, just insert a = between <identifier> and <expression>.
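For instance, a flex definition such as the following (the macro name DIGIT is just an example):

```
DIGIT   [0-9]
```

becomes in JFlex:

```
DIGIT = [0-9]
```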
The syntax and semantics of regular expressions in flex are pretty much the
same as in JFlex. A little attention is needed for some escape sequences
present in flex (such as \a
) that are not supported in JFlex. These
escape sequences should be transformed into their octal or hexadecimal
equivalent.
Another point are predefined character classes. Flex offers the ones directly supported by C, JFlex offers the ones supported by Java. These classes will sometimes have to be listed manually (if there is need for this feature, it may be implemented in a future JFlex version).
In flex, the '^' (beginning of line) and '$' (end of line) operators
consider the \n character as the only line terminator. This should usually
not cause many problems, but you should be prepared for occurrences of \r
or \r\n or one of the characters \u2028, \u2029, \u000B, \u000C, or
\u0085. They are considered to be line terminators in Unicode and
therefore may not be consumed when ^ or $ is present in a rule.
If your generated lexer has the class name Scanner, the parser is started from the main program like this:
...
try {
  parser p = new parser(new Scanner(new FileReader(fileName)));
  Object result = p.parse().value;
}
catch (Exception e) {
...
The scanner's end of file value can be returned either with the
%eofval{ ... %eofval} directive or by using an <<EOF>> rule.
If your new symbol interface is called mysym for example, the corresponding code in the JFlex specification would be either
%eofval{
  return mysym.EOF;
%eofval}
in the macro/directives section of the spec, or it would be
<<EOF>> { return mysym.EOF; }
in the rules section of your spec.
The main difference between the %cup switch in JFlex 1.2.1 and lower and the current JFlex version is that JFlex scanners now automatically implement the java_cup.runtime.Scanner interface. This means the scanning function changes its name from yylex() to next_token().
The main difference from older CUP versions to 0.10j is that CUP now has a constructor that accepts a java_cup.runtime.Scanner as argument and uses this scanner by default (so no ``scan with'' code is necessary any more).
If you have an existing CUP specification, it will probably look somewhat like this:
parser code {:
  Lexer lexer;

  public parser (java.io.Reader input) {
    lexer = new Lexer(input);
  }
:};

scan with {:
  return lexer.yylex();
:};
To upgrade to CUP 0.10j, you could change it to look like this:
parser code {:
  public parser (java.io.Reader input) {
    super(new Lexer(input));
  }
:};
If you do not mind changing the method that calls the parser, you could remove the constructor entirely (and if there is nothing else in it, the whole parser code section as well, of course). The calling main procedure would then construct the parser as shown in the section above.
The JFlex specification does not need to be changed.
JFlex has built-in support for the Java extension BYacc/J [9] by Bob Jamison to the classical Berkeley Yacc parser generator. This section describes how to interface BYacc/J with JFlex. It builds on many helpful suggestions and comments from Larry Bell.
Since Yacc's architecture is a bit different from CUP's, the interface setup also works in a slightly different manner. BYacc/J expects a function int yylex() in the parser class that returns each next token. Semantic values are expected in a field yylval of type parserval where ``parser'' is the name of the generated parser class.
For a small calculator example, one could use a setup like the following on the JFlex side:
%%

%byaccj

%{
  /* store a reference to the parser object */
  private parser yyparser;

  /* constructor taking an additional parser object */
  public Yylex(java.io.Reader r, parser yyparser) {
    this(r);
    this.yyparser = yyparser;
  }
%}

NUM = [0-9]+ ("." [0-9]+)?
NL  = \n | \r | \r\n

%%

/* operators */
"+" |
..
"(" |
")"    { return (int) yycharat(0); }

/* newline */
{NL}   { return parser.NL; }

/* float */
{NUM}  { yyparser.yylval = new parserval(Double.parseDouble(yytext()));
         return parser.NUM; }
The lexer expects a reference to the parser in its constructor. Since Yacc allows direct use of terminal characters like '+' in its specifications, we just return the character code for single char matches (e.g. the operators in the example). Symbolic token names are stored as public static int constants in the generated parser class. They are used as in the NL token above. Finally, for some tokens, a semantic value may have to be communicated to the parser. The NUM rule demonstrates that bit.
A matching BYacc/J parser specification could look like this:
%{
  import java.io.*;
%}

%token NL          /* newline  */
%token <dval> NUM  /* a number */

%type <dval> exp

%left '-' '+'
..
%right '^'         /* exponentiation */

%%

..

exp: NUM           { $$ = $1; }
   | exp '+' exp   { $$ = $1 + $3; }
   ..
   | exp '^' exp   { $$ = Math.pow($1, $3); }
   | '(' exp ')'   { $$ = $2; }
   ;

%%

/* a reference to the lexer object */
private Yylex lexer;

/* interface to the lexer */
private int yylex () {
  int yyl_return = -1;
  try {
    yyl_return = lexer.yylex();
  }
  catch (IOException e) {
    System.err.println("IO error :"+e);
  }
  return yyl_return;
}

/* error reporting */
public void yyerror (String error) {
  System.err.println ("Error: " + error);
}

/* lexer is created in the constructor */
public parser(Reader r) {
  lexer = new Yylex(r, this);
}

/* that's how you use the parser */
public static void main(String args[]) throws IOException {
  parser yyparser = new parser(new FileReader(args[0]));
  yyparser.yyparse();
}
Here, the customised part is mostly in the user code section: We create the lexer in the constructor of the parser and store a reference to it for later use in the parser's int yylex() method. This yylex in the parser only calls int yylex() of the generated lexer and passes the result on. If something goes wrong, it returns -1 to indicate an error.
Runnable versions of the specifications above are located in the examples/byaccj directory of the JFlex distribution.
Please use the bugs section of the JFlex web site to check for open issues.
There is absolutely NO WARRANTY for JFlex, its code and its documentation.
See the file COPYRIGHT for more information.